<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>MIT Theses</title>
<link>https://hdl.handle.net/1721.1/7582</link>
<description/>
<pubDate>Sat, 14 Mar 2026 07:58:47 GMT</pubDate>
<dc:date>2026-03-14T07:58:47Z</dc:date>
<item>
<title>A design of a low-pressure steam turbine</title>
<link>https://hdl.handle.net/1721.1/165076</link>
<description>A design of a low-pressure steam turbine
Jones, Bradley.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1910
</description>
<pubDate>Sat, 01 Jan 1910 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165076</guid>
<dc:date>1910-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational study of static replication for barrier options</title>
<link>https://hdl.handle.net/1721.1/165075</link>
<description>Computational study of static replication for barrier options
Sun, Hai Po.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1997; Includes bibliographical references (leaves 75-76).
</description>
<pubDate>Wed, 01 Jan 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165075</guid>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling, control and experimentation of a two dimensional linear motor</title>
<link>https://hdl.handle.net/1721.1/165074</link>
<description>Modeling, control and experimentation of a two dimensional linear motor
Castañeda Vega, José Israel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1997; Includes bibliographical references (leaf 118).
</description>
<pubDate>Wed, 01 Jan 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165074</guid>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of anode dimensions in mercury-vapour thermionic rectifiers</title>
<link>https://hdl.handle.net/1721.1/165073</link>
<description>A study of anode dimensions in mercury-vapour thermionic rectifiers
Fussell, Lewis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1932; Includes bibliographical references (leaf 50).
</description>
<pubDate>Fri, 01 Jan 1932 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165073</guid>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cloud-chamber study of cosmic ray showers in lead plates</title>
<link>https://hdl.handle.net/1721.1/165072</link>
<description>Cloud-chamber study of cosmic ray showers in lead plates
Fussell, Lewis.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1938; Includes bibliographical references (leaves [113]-[118]).
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165072</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a high-speed light source suitable for photoelastic studies</title>
<link>https://hdl.handle.net/1721.1/165071</link>
<description>Development of a high-speed light source suitable for photoelastic studies
Wyle, Frank S.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1941; Includes bibliographical references (leaf 25).
</description>
<pubDate>Wed, 01 Jan 1941 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165071</guid>
<dc:date>1941-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boiling and spreading rates of instantaneous liquid methane spills on water</title>
<link>https://hdl.handle.net/1721.1/165070</link>
<description>Boiling and spreading rates of instantaneous liquid methane spills on water
Chatlos, David Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1982; Supervised by Robert C. Reid.; Includes bibliographical references (leaves 86-88).
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165070</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Manipulation and measurement of charge transfer kinetics at chemically modified electrodes</title>
<link>https://hdl.handle.net/1721.1/165069</link>
<description>Manipulation and measurement of charge transfer kinetics at chemically modified electrodes
Lewis, Nathan S. (Nathan Saul)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1981; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165069</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of Fourier transform spectroscopy to the absolute determination of the chemical shift of protons.</title>
<link>https://hdl.handle.net/1721.1/165068</link>
<description>Application of Fourier transform spectroscopy to the absolute determination of the chemical shift of protons.
Wright, Francine Elaine.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1975; Vita.; Bibliography: leaves 65-66.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165068</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Double valves</title>
<link>https://hdl.handle.net/1721.1/165067</link>
<description>Double valves
Faunce, Linus.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165067</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Higher Siegel-Weil formulae over function fields</title>
<link>https://hdl.handle.net/1721.1/164977</link>
<description>Higher Siegel-Weil formulae over function fields
Mkrtchyan, Mikayel
In their seminal work, Feng-Yun-Zhang introduced function field analogues of Kudla-Rapoport cycles for moduli spaces of unitary shtukas, and initiated the study of their intersection theory. They proved a higher Siegel-Weil formula in the case of non-degenerate Fourier coefficients, relating the degrees of these cycles to higher derivatives of Siegel-Eisenstein series. In this thesis, we generalize their result in two directions: we 1) prove a higher Siegel-Weil formula for unitary groups for corank-1 degenerate coefficients, and 2) introduce analogous cycles on moduli spaces of symplectic shtukas, and prove a higher Siegel-Weil formula for such cycles in the non-degenerate case, relating their degrees to derivatives of Siegel-Eisenstein series on split orthogonal groups.
</description>
<pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164977</guid>
<dc:date>2026-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Diverse Array of Synthetic Strategies for Phosphorus Group Transfer Chemistry: From Phosphinidenes to Phosphates</title>
<link>https://hdl.handle.net/1721.1/164976</link>
<description>A Diverse Array of Synthetic Strategies for Phosphorus Group Transfer Chemistry: From Phosphinidenes to Phosphates
Xin, Tiansi
This thesis compiles the published scientific contributions of Tiansi Xin. Chapter 1 consists of a brief collection of eulogies from friends and colleagues, reflecting on his life and time at the Massachusetts Institute of Technology. The subsequent chapters describe the development of novel synthetic methods for the transfer of phosphorus-containing moieties, specifically metaphosphates and phosphinidenes. The work presented here has significant implications for both the fundamental understanding and practical advancement of synthetic inorganic and organic chemistry. Chapters 2 and 3 address the sustainable production and processing of phosphorus-containing chemicals, focusing on mechanochemical methods to synthesize reduced phosphorus species while circumventing the need to access hazardous white phosphorus as an intermediate. In particular, Chapter 2 describes a solvent-free mechanochemical approach to producing phosphite (HPO₃²⁻) via hydride-mediated reduction of condensed phosphates. Using potassium hydride, a range of inorganic phosphate sources—including pyrophosphate, triphosphate, trimetaphosphate, fluorophosphate, and polyphosphate—were converted to phosphite in moderate to high yields. Mechanistic studies identified overreduction pathways leading to hypophosphite and other low-oxidation P-species. Chapter 3 similarly applies this mechanochemical approach to phosphorus–carbon bond formation, reporting the phosphorylation of acetylides with condensed phosphates to afford phosphonates. Biogenic polyphosphates were also shown to be viable precursors, a proof-of-concept for closing the modern phosphorus cycle using recycled inputs. These results demonstrate the possibility of accessing organophosphorus chemicals directly from condensed phosphates and may offer an opportunity toward a “greener” phosphorus industry. Chapters 4 and 5 shift focus to phosphinidene transfer chemistry and the synthesis of novel phosphorus-containing heterocycles. 
This expands on previously published studies from the Cummins group on the chemistry of dibenzo-7-phosphanorbornadiene “RPA” reagents. Chapter 4 reports the preparation and structural characterization of iron–phosphido complexes relevant to phosphinidene group transfer catalysis and describes the development of an improved catalytic system based on a simple diiron precursor (Fp₂), enabling efficient synthesis of phosphiranes from electron-deficient alkenes. The mechanism was thoroughly experimentally and computationally interrogated. Chapter 5 describes the novel synthesis of free, uncomplexed phosphet-2-ones via phosphinidene transfer to cyclopropenones, with experimental and theoretical studies supporting a mechanism involving ketene-derived intermediates and transformations to additional phosphorus heterocycles through subsequent reactions.
</description>
<pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164976</guid>
<dc:date>2026-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geology of Deception Gulch and the Verde Central mine</title>
<link>https://hdl.handle.net/1721.1/164923</link>
<description>Geology of Deception Gulch and the Verde Central mine
Benedict, P. C. (Platt Carrico), 1900-1969.
Thesis: M.S., Massachusetts Institute of Technology, Department of Geology and Geophysics, 1923
</description>
<pubDate>Mon, 01 Jan 1923 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164923</guid>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural geology of Eastern Massachusetts</title>
<link>https://hdl.handle.net/1721.1/164922</link>
<description>Structural geology of Eastern Massachusetts
Ilsley, Ralph, 1896-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Geology, 1934; Vita.
</description>
<pubDate>Mon, 01 Jan 1934 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164922</guid>
<dc:date>1934-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The design of a hydraulic draft gear for railway passenger cars</title>
<link>https://hdl.handle.net/1721.1/164921</link>
<description>The design of a hydraulic draft gear for railway passenger cars
Pearson, Harry L.; McGrady, Charles T.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1922; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1922 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164921</guid>
<dc:date>1922-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of "Chu-Ma" as a textile fiber</title>
<link>https://hdl.handle.net/1721.1/164920</link>
<description>A study of "Chu-Ma" as a textile fiber
Chou, Cheng Yu, 1901-; Hsueh, Tsu Kang.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1924
</description>
<pubDate>Tue, 01 Jan 1924 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164920</guid>
<dc:date>1924-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The transportation decision making process in metropolitan Boston</title>
<link>https://hdl.handle.net/1721.1/164919</link>
<description>The transportation decision making process in metropolitan Boston
Zinner, Richard Mark.
Thesis: B.S., Massachusetts Institute of Technology, Department of Political Science, 1967; One unnumbered page inserted.; Bibliography: leaf 74.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164919</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Oscillographic presentation of impedances on the reflection-coefficient plane</title>
<link>https://hdl.handle.net/1721.1/164918</link>
<description>Oscillographic presentation of impedances on the reflection-coefficient plane
Eckhart, Myron.; Fowler, Earl Bealle.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1949
</description>
<pubDate>Sat, 01 Jan 1949 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164918</guid>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radiation transfer in massive binary x-ray systems</title>
<link>https://hdl.handle.net/1721.1/164917</link>
<description>Radiation transfer in massive binary x-ray systems
Lewis, Wayne Lloyd.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1991; Includes bibliographical references (leaves 167-173).
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164917</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laser induced photoionization of helium</title>
<link>https://hdl.handle.net/1721.1/164916</link>
<description>Laser induced photoionization of helium
Lewis, Wayne Lloyd.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1980; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164916</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear elastic analysis of reinforced concrete structures by the finite element method</title>
<link>https://hdl.handle.net/1721.1/164915</link>
<description>Nonlinear elastic analysis of reinforced concrete structures by the finite element method
Tulga, Said Şahin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1979; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164915</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Subcontractor bidding strategy</title>
<link>https://hdl.handle.net/1721.1/164914</link>
<description>Subcontractor bidding strategy
Gilbane, Thomas Freeman.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1975; Bibliography: leaves 104-105.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164914</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Petrography and geology of the Shoshone mining region in northwestern Wyoming</title>
<link>https://hdl.handle.net/1721.1/164913</link>
<description>Petrography and geology of the Shoshone mining region in northwestern Wyoming
Benedict, P. C. (Platt Carrico), 1900-1969.
Thesis: B.S., Massachusetts Institute of Technology, Department of Geology, 1922
</description>
<pubDate>Sun, 01 Jan 1922 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164913</guid>
<dc:date>1922-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of Interaction Between Hydraulic and Natural Fractures in Shale Rocks</title>
<link>https://hdl.handle.net/1721.1/164863</link>
<description>Mechanisms of Interaction Between Hydraulic and Natural Fractures in Shale Rocks
Arzuaga García, Ignacio Martín
Understanding the interaction between hydraulically induced fractures and pre-existing natural fractures in geologic formations is key for optimizing subsurface energy systems that rely on fluid injection into fractured rocks. These include Enhanced Geothermal Systems (EGS), CO₂ sequestration, hydrogen storage in depleted reservoirs, unconventional oil and gas development in shale formations, and nuclear waste disposal, among others. In all these applications, controlling fracture propagation and interaction is essential for ensuring operational efficiency, safety, and long-term integrity of the system. This thesis presents a comprehensive experimental and theoretical investigation of hydraulic fracture (HF) interactions with natural fractures (NFs), using Opalinus Clayshale as a representative anisotropic material.&#13;
&#13;
The experimental work involved a series of hydraulic fracturing tests on Opalinus Clayshale specimens under controlled quasi-true-triaxial stress conditions, comparing normal and dried states. Novel monitoring techniques, including high-resolution imaging, high-speed video, acoustic emissions (AE), and pressure tracking, were employed to capture the fracturing process in real-time. Three dominant interaction modes (Crossing, Arrest, and Opening) were systematically characterized and linked to key parameters, including stress ratio, fracture geometry, and injection rates. A critical stress ratio (σ₁/σ₃) of approximately 20 was identified as the threshold for achieving fracture crossing under our experimental conditions: cohesionless, “open” natural fractures, with a low viscosity injection fluid, in a toughness-dominated regime. In dried specimens, high flaw pressurization rates were necessary to overcome matrix fluid loss and achieve crossing.&#13;
&#13;
To complement and interpret the experimental results, existing theoretical models were reviewed and implemented. Furthermore, a simplified version of the OpenT model (Chuprakov et al., 2014) was developed and applied to Opalinus Clayshale, incorporating stress, energy, friction, and permeability effects. By integrating laboratory results with theoretical frameworks, this thesis offers an integrated approach to the predictive understanding of fracture propagation in naturally fractured rocks, showing that the mechanism of interaction is determined not only by the characteristics of the discontinuity and the far-field stresses, but also by the dynamic energy balance at the fracture tip, which is influenced by injection rate, fluid viscosity, and discontinuity properties.&#13;
&#13;
Overall, this thesis bridges the gap between laboratory experiments and theoretical models, advancing a more comprehensive understanding of fracture propagation in naturally fractured media. The findings highlight the importance of considering both mechanical and hydraulic parameters, particularly in low-viscosity, toughness-dominated regimes, for accurately predicting fracture behavior.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164863</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Satellite Drag and Sustainable Space Operations in a Dynamic Thermosphere</title>
<link>https://hdl.handle.net/1721.1/164862</link>
<description>Satellite Drag and Sustainable Space Operations in a Dynamic Thermosphere
Parker, William E.
Earth’s orbit has become increasingly congested and contested in recent years. The surge in launched payloads, combined with satellite failures, explosions, and collisions, has contributed to a large and growing population of orbital debris objects that can remain in orbit for decades, centuries, or longer. Meanwhile, decreasing launch costs and maturing satellite technology have created conditions favorable for rapid commercialization across orbital regimes, especially in low Earth orbit (LEO). Today, a small number of commercial entities operate the large majority of the world’s active satellites as part of proliferated LEO constellations. Sustaining productive activity in an increasingly crowded orbital environment has made satellite conjunction assessment and collision avoidance essential for safe operations. These efforts require not just accurate trajectory predictions, but also credible estimates of uncertainty. In LEO, variability in atmospheric drag is by far the dominant source of propagation error, often leading to deviations of several kilometers per day due to unpredictable solar and geomagnetic activity. Even over short timescales, trajectory prediction is challenging because existing forecasts exhibit limited predictive skill. Although forecast errors are often non-Gaussian and heteroscedastic, operational products are generally presented as deterministic, and atmospheric models rarely provide rigorous uncertainty characterization. This work introduces a new approach for probabilistic satellite drag modeling based on historical correlations between space weather drivers and satellite dynamics. Unlike traditional methods, it models satellite behavior directly without reconstructing thermospheric mass density or requiring detailed knowledge of satellite properties such as the ballistic coefficient. This end-to-end strategy offers substantial computational and operational advantages for many space domain awareness tasks. 
Capturing both trajectory predictions and their associated uncertainty is critical for enabling informed collision avoidance decisions, particularly during geomagnetic storms when current infrastructure frequently fails. Because the orbital lifetime of debris objects can exceed hundreds of years, population dynamics in space critically depend on long-term variability in the composition of Earth’s thermosphere. Rising concentrations of carbon dioxide and other greenhouse gases have caused warming in the troposphere but cooling and contraction in the upper atmosphere. This contraction decreases atmospheric density in LEO, reducing drag and extending the orbital lifetime of debris objects. Longer-lived debris populations pose a persistent collision hazard for all active satellites as long as they remain in orbit. Even natural events, such as a prolonged grand solar minimum, could further reduce thermospheric density and contribute to longer debris lifetime in LEO. With little ability to predict such an event, it is necessary to understand the potential consequences and to identify strategies that enable the continued safe and productive use of LEO. This work models the impact of such long-term environmental changes on limits for sustainable satellite deployments. LEO is a finite resource increasingly at risk of overexploitation. Conserving it and sharing it fairly requires that we first understand its fundamental capacity and our current occupation of that capacity. Some metrics have been proposed to measure the satellite carrying capacity of Earth’s orbit, but none have previously accounted for the potential influence of a changing space climate. This work develops new methods for defining carrying capacity as a common currency, enabling clear constraint-driven thresholds on activity and a better understanding of how existing and proposed missions consume available capacity. 
These new metrics provide insight into how environmental variability may affect the long-term sustainability of operations in LEO. Respecting and understanding this influence that the natural environment has on our collective ability to operate spacecraft in LEO is critical to preventing the overexploitation of this regime and protecting it for future generations.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164862</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Large Language Models as Circuit Design Assistants</title>
<link>https://hdl.handle.net/1721.1/164861</link>
<description>Evaluating Large Language Models as Circuit Design Assistants
Cox, Matthew J.
Large language models (LLMs) have exploded in capability in recent years. Previous attempts at AI systems for circuit design have had limited proficiency and been restricted in problem scope. LLMs, with their breadth of knowledge and reasoning ability, are a promising technology for a much more general-purpose circuit design assistant. We developed a dataset of electrical engineering problems and solutions with which to test an LLM-based system, since no such publicly available dataset exists to our knowledge; unmodified GPT-4 was able to solve 42% of the problems. We did a preliminary comparison of several knowledge bases to use for RAG knowledge injection, finding that a small, curated set of resources performed better than a larger, less-focused set of resources, though there were confounding factors which may have skewed the result. While this work is a start, significant future work is needed to continue developing an LLM-based circuit design assistant.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164861</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Increasing Program Code Coverage Using Fuzzing and Targeted Branch Exploration</title>
<link>https://hdl.handle.net/1721.1/164860</link>
<description>Increasing Program Code Coverage Using Fuzzing and Targeted Branch Exploration
Nguyen, Gary
Code coverage is a longstanding metric for evaluating how thoroughly a program has been tested. Achieving high coverage remains a priority goal for quality assurance and software stability. Exhaustive enumeration of possible input paths to every code region is desirable in theory but computationally infeasible in practice, especially in large-scale codebases. Fuzzing is a widely used technique for input generation and is effective at exploring smaller programs but often struggles with more complex conditional logic and nested modules. Concolic execution, which exhaustively explores paths using constraint solving, can work effectively with complex conditional logic but suffers from path explosion. Targeted branch exploration is a similar approach for input generation but sidesteps the path explosion problem by focusing more on specific constraint paths of interest.&#13;
&#13;
In this thesis, I introduce a hybrid system that combines fuzzing and targeted branch exploration with the goal of improving code coverage by leveraging the complementary strengths of each. The system uses fuzzing to quickly generate a broad input corpus and follows up with targeted branch exploration to explore paths that fuzzing struggles to reach. Findings from experiments on two C projects of different complexities show that the system did not outperform the individual techniques in terms of raw coverage, revealing limitations of the approach and opportunities for future improvement.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164860</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Financial Economics and Econometrics</title>
<link>https://hdl.handle.net/1721.1/164859</link>
<description>Essays in Financial Economics and Econometrics
Orestes, Victor M.
This thesis comprises three essays in finance and econometrics. The first two essays focus on the role of credit access and liquidity in shaping real firm outcomes. The first essay examines the transmission of modern monetary policy through corporate asset markets. Exploiting quasi-experimental variation in the Central Bank of Brazil’s collateral framework and implementing a novel dynamic regression discontinuity design, it shows that monetary policy can ease expected future borrowing constraints, reduce firms’ precautionary cash holdings, and stimulate employment. The second essay analyzes how receivables financing through factoring helps firms smooth cash flows. Using a shift-share instrument and matched administrative data, it finds that cheaper liquidity leads firms to rely more on permanent labor. The third essay develops a new method for distributional inference—nonparametric quantile mixture models. This framework can be applied to financial settings such as tail risk estimation and density forecasting, as well as to causal inference when the objective is to estimate the distributional effects of interventions. It is used here to quantify the heterogeneous wage effects of a major environmental disaster.&#13;
&#13;
The first chapter (joint with Luis Alvarez and Thiago Christiano Silva) studies how modern monetary policy tools, which increasingly operate through corporate asset markets, affect real firm outcomes. We exploit quasi-experimental variation from the inclusion of specific corporate debt instruments in the Central Bank of Brazil’s collateral framework and implement a novel dynamic regression discontinuity design. We find that eligibility increases firms’ debt issuance, modestly lowers spreads, and reduces cash holdings, reflecting a decline in precautionary savings. These effects translate into higher employment and greater supply chain liquidity. We interpret the mechanism through the lens of segmented financial markets: by relaxing firms’ expected future borrowing constraints, the policy acts as a persistent borrowing subsidy and liquidity injection. This encourages firms to reduce cash hoarding and expand production. Using a semi-structural framework calibrated to our reduced-form estimates, we find that an induced 0.8% borrowing subsidy leads to a 1% increase in debt issuance, a 0.2% reduction in cash holdings, and a 0.4% increase in the wage bill.&#13;
&#13;
The second chapter (joint with Thiago Christiano Silva and Henry Zhang) shows that firms experience large increases in sales and purchases after receiving cheaper liquidity. We focus on factoring, defined as the supplier-initiated sale of receivables. In Brazil, receivables funds (FIDCs) securitize receivables for institutional investors. By assembling a novel transaction-level dataset of factoring with other credit operations for all registered firms and FIDCs, we construct a shift-share instrument for factoring financing supply based on FIDC flows. We then use a novel combination of electronic payments, trade credit, and employer-employee matched data to estimate the impacts. A flow-induced increase in receivables demand reduces firms’ factoring interest rate. In response, firms demand more permanent labor and less temporary labor. In our model, these effects arise from factoring’s purpose of reducing cash inflow volatility, helping firms match inflows to outflows, which firms otherwise achieve at an efficiency cost through substitution across labor types.&#13;
&#13;
The third chapter (joint with Luis Alvarez) introduces nonparametric quantile mixture models as a computationally convenient and flexible alternative to standard nonparametric density mixtures, which are widely used in Statistics and Econometrics but face significant computational and inferential challenges. We propose a sieve estimator based on a generalized method of L-moments and develop a full inferential theory. In doing so, we contribute to the statistical literature by extending a numerical bootstrap method to high-dimensional settings. As a direct application of our theory, we provide the first inference method for the distributional synthetic controls of Gunsilius (2023), a novel tool for counterfactual analysis that previously lacked formal inference procedures. We apply this method to evaluate the effects of the Brumadinho dam collapse—a large-scale environmental disaster—on the local wage distribution. The results reveal substantial heterogeneity across the distribution, with evidence of displacement effects in which median-paying jobs are replaced by lower-wage contracts.&#13;
JEL Codes: C1, E4, E5, G2, G3
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164859</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine-Learned Representations of Basis Sets and Their&#13;
Application in Quantum Computational Chemistry</title>
<link>https://hdl.handle.net/1721.1/164858</link>
<description>Machine-Learned Representations of Basis Sets and Their&#13;
Application in Quantum Computational Chemistry
He, Wenhao
Quantum simulations of electronic structure promise to deliver significant speedups over classical methods, but remain limited by the number of qubits on near-term devices. A key strategy to reduce quantum resource requirements is to truncate the molecular Hilbert space via compact and efficient basis sets. However, most optimized basis sets either rely on predefined heuristics or require expensive classical computations, such as CASSCF orbital optimization or ℓ1-norm minimization of the Hamiltonian. In this work, we introduce a general machine learning framework for fast basis set prediction in quantum computational chemistry. Our method employs an equivariant graph neural network that outputs a Hermitian matrix encoding optimized molecular orbitals. The eigenvectors of this matrix define a transferable and efficient basis set, trained on orbitals obtained via CASSCF and Hamiltonian ℓ1-norm optimization. We evaluate our model on hydrogen chains and demonstrate that the predicted bases achieve energy accuracy and Hamiltonian sparsity comparable to orbital-optimized methods, while reducing classical preprocessing time. In addition, the predicted orbitals can be directly used as high-quality initial guesses for CASSCF calculations, further accelerating their convergence.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164858</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Signaling at the Tumor-Immune Interface in Glioblastoma</title>
<link>https://hdl.handle.net/1721.1/164857</link>
<description>Signaling at the Tumor-Immune Interface in Glioblastoma
D'Souza, Alicia D.
Glioblastoma (GBM) is a devastating brain cancer, and the standard of care has not changed in over 20 years. GBM tumors are composed of a milieu of cancer cells and innate immune cells, which are co-opted by the cancer cells to promote an anti-inflammatory environment. Despite the tremendous success of immunotherapy in several cancers over the past 10 years, immunotherapies have failed to show efficacy in GBM. A systems biology approach to characterizing temporal changes at the tumor-immune interface of glioblastoma could illuminate new strategies to activate an anti-tumor immune response by examining changes in cell signaling and antigen presentation.&#13;
&#13;
In the first part of my thesis, I investigated how macrophages alter their phenotype in response to tumor co-culture and how these changes are reflected at the level of the phosphoproteome. To characterize signaling changes in distinct cell populations during co-culture, I developed a method to preserve and analyze cell-type-specific signaling using fixation. This approach enables phosphoproteomic profiling of two interacting cell types, capturing dynamic signaling events with cell-type resolution. I applied this method to study co-cultures of glioblastoma (GBM) cells and primary human macrophages. When cultured together, GBM cells induced an anti-inflammatory, immunosuppressive phenotype in macrophages, mirroring features observed in the glioblastoma tumor microenvironment. Phosphoproteomic analysis revealed that this phenotypic shift was accompanied by distinct signaling alterations in macrophages, including the upregulation of ABL kinase activity. To test this finding, I treated macrophages with an ABL kinase inhibitor and observed a reduction in the anti-inflammatory phenotype, suggesting that ABL signaling plays a role in supporting immunosuppressive macrophage polarization. Furthermore, in a mouse model of GBM, treatment with an ABL kinase inhibitor led to a reduction in the abundance of anti-inflammatory macrophages within the tumor and was associated with a modest extension of survival.&#13;
&#13;
In the second part, I examined changes in antigen presentation and signaling in glioblastoma tumors in response to treatment with an oncolytic virus (OV). In patient-derived xenograft (PDX) mouse models, OV treatment increased antigen presentation, pointing to the use of OV therapy to reshape the tumor microenvironment toward a more inflammatory state. Finally, tissue obtained from serial biopsies of GBM patients treated with OV shows an increase in antigen presentation and in both Class I and Class II MHC protein expression. We also observed an increase in interferon alpha and interferon gamma signaling pathways as well as early induction of apoptotic pathways. These findings highlight the role of therapeutics in altering the tumor microenvironment and potentially priming it for combination immunotherapies. This thesis explores the dynamic nature of the tumor and immune compartments in glioblastoma and underscores how therapies can act on the immune compartment to promote anti-tumor activity.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164857</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>SmellNet: A Large-scale Dataset for Real-world Smell&#13;
Recognition</title>
<link>https://hdl.handle.net/1721.1/164856</link>
<description>SmellNet: A Large-scale Dataset for Real-world Smell&#13;
Recognition
Feng, Dewei
The ability of AI to sense and identify various substances based on their smell alone can have profound impacts on allergen detection (e.g., smelling gluten or peanuts in a cake), monitoring the manufacturing process, and sensing hormones that indicate emotional states, stress levels, and diseases. Despite these broad impacts, there are virtually no large-scale benchmarks, and therefore little progress, for training and evaluating AI systems’ ability to smell in the real world. In this paper, we use portable gas and chemical sensors to create SmellNet, the first large-scale database that digitizes a diverse range of smells in the natural world. SmellNet contains about 180,000 time steps of 50 substances (spanning nuts, spices, herbs, fruits, and vegetables) with 50 hours of data. Using SmellNet, we trained AI models for real-time classification of substances based on their smell alone. Our best methods leverage sequence models, contrastive learning to integrate high-resolution Gas Chromatography–Mass Spectrometry molecular data, and a new temporal difference method that identifies sharp changes in sensor readings. Our best models achieve up to 65.35% accuracy on pre-recorded data, and generalize to real-world conditions with 10.71% accuracy on nuts and 25.38% on spices in the challenging 50-way online classification task. Despite these promising results, SmellNet highlights many technical challenges in building AI for smell, including richer feature learning, on-edge smell models, and robustness to environmental changes.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164856</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Zero-Shot Pretrained Models for Efficient Black-Box Optimization</title>
<link>https://hdl.handle.net/1721.1/164855</link>
<description>Towards Zero-Shot Pretrained Models for Efficient Black-Box Optimization
Meindl, Jamison Chivvis
Global optimization of expensive, derivative-free black-box functions requires extreme sample efficiency. While Bayesian optimization (BO) is the current state-of-the-art, its performance hinges on surrogate and acquisition function hyperparameters that are often hand-tuned and fail to generalize across problem landscapes. We present ZeroShotOpt, the first general-purpose, pretrained model for continuous black-box optimization tasks ranging from 2D to 20D. Our approach leverages offline reinforcement learning on large-scale optimization trajectories collected from 12 BO variants. To scale pretraining, we generate millions of synthetic Gaussian process-based functions with diverse landscapes, enabling the model to learn transferable optimization policies. As a result, ZeroShotOpt achieves robust zero-shot generalization on a wide array of unseen synthetic and real-world benchmarks, matching or surpassing the sample efficiency of leading global optimizers, including BO, while also offering a reusable foundation for future extensions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164855</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Temperature Characterization of Colloidal Quantum Dot&#13;
Light Emitting Diodes</title>
<link>https://hdl.handle.net/1721.1/164854</link>
<description>Temperature Characterization of Colloidal Quantum Dot&#13;
Light Emitting Diodes
Nguyen, Thienan D.
Colloidal quantum dot light emitting diodes are promising candidates for the next generation of display technologies. Their brighter emission, greater color purity, and higher efficiency make them highly desirable in consumer electronics. As such, research into the performance and stability of these novel LEDs is crucial for their operation in displays. These investigations are ongoing, with focused efforts on improving operating stability through different quantum dot materials and passivation methods. However, less attention has been paid to confidently understanding the fundamental relationships between current, voltage, and luminance by which these devices operate. These electrical characteristics reveal insights into the operation of these devices and the behavior of charge carriers. Additionally, temperature-dependent electrical measurements can expose different behavior at different temperatures and deviations from the expected performance at set temperatures. From the temperature-dependent processes thus revealed, a better understanding of how the device operates is gained. In this thesis, the temperature-dependent electrical characteristics of quantum dot light emitting diodes were investigated by measuring the current-voltage-luminance (JVL) relationships at various cryogenic temperatures, ranging from 78 K (the boiling point of liquid nitrogen) to 293 K (room temperature). This investigation revealed the temperature-dependent nature and origin of the turn-on voltage, current, EQE, EQE roll-off, and hysteresis.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164854</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty-Aware Knowledge Graph Retrieval Methods and Their Use in LLM Question-Answering</title>
<link>https://hdl.handle.net/1721.1/164853</link>
<description>Uncertainty-Aware Knowledge Graph Retrieval Methods and Their Use in LLM Question-Answering
Rich, Benjamin R.
Knowledge Graph Question Answering (KGQA) encompasses a set of techniques aimed at generating accurate, interpretable responses to natural language queries posed over structured, graph-based datasets. Recent approaches to KGQA involve reducing the knowledge graph (KG) to a relevant subgraph, which is then encoded in natural language as a series of triples (subject, predicate, object) and passed to a large language model (LLM) for interpretation and answer generation. These methods have shown state-of-the-art accuracy. However, this paradigm is undermined by a critical vulnerability: the retrieval of irrelevant or erroneous facts can amplify LLM hallucinations and degrade system trustworthiness, while the reasoning process remains opaque. This thesis addresses this challenge by extending an existing state-of-the-art KGQA architecture with uncertainty-aware subgraph retrieval methods. To achieve this, we modify the retrieval component to learn the epistemic uncertainty of each candidate triple’s relevance to a given query. We implement these modifications using Bayesian methods and learn a well-calibrated approximation of the posterior distribution over triple relevance. By explicitly modeling this uncertainty, the retriever model is shown to provide a fine-grained confidence score for each piece of evidence. We expose these metrics downstream to the LLM during reasoning and evaluate whether LLMs can reason over uncertainty-related metrics to improve KGQA. We find that LLMs cannot reason effectively over uncertainties in most cases, but that agentic workflows that provide selective access to uncertainty metrics may enhance performance. We evaluate our approach against established benchmarks using hit-rate and set-comparison accuracy metrics. Additionally, we introduce reasoning-path and statistical trust metrics derived from calibrated uncertainty scores.
Our analysis reveals a significant positive correlation between path-based uncertainty metrics and the veracity of the LLM’s answers. These findings establish a robust foundation for developing uncertainty-grounded trust mechanisms in LLM-agnostic KGQA systems. As a proof of concept, a lightweight classifier trained exclusively on the LLM’s inputs and outputs demonstrates substantial predictive power in identifying correct responses. Finally, we briefly explore using uncertainty to identify out-of-distribution (OOD) queries.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164853</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applied Compiler Optimizations for Proving Code</title>
<link>https://hdl.handle.net/1721.1/164852</link>
<description>Applied Compiler Optimizations for Proving Code
Ruiz, Ricardo
The recent popularity of massively distributed, trustless systems has created a demand for cryptographic proofs: systems to prove that a piece of data is a valid output for a given program. These systems exist, but face very high runtimes for the generation of proofs. Significant effort has been invested in optimizing the prover systems, but relatively less has been focused on optimizing the code that gets read as an input. This paper proposes a new approach to optimizing prover systems by modifying the compiler to produce proof-ready code. It proposes a benchmarking framework for comparing the relative proof costs of RISC-V instructions; the resulting analysis finds that shift instructions do not offer heavy savings over multiplication. This finding suggests that strength reduction, a fundamental optimization in modern compilers, can sabotage end-to-end performance. The paper proposes methods for applying this knowledge to better optimize code, leaving the door open for future researchers to continue to make code proofs more performant and accessible.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164852</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reconstructing Cross-Species Ancestral Adeno-Associated&#13;
Viruses for Enhanced Gene Therapy Delivery</title>
<link>https://hdl.handle.net/1721.1/164850</link>
<description>Reconstructing Cross-Species Ancestral Adeno-Associated&#13;
Viruses for Enhanced Gene Therapy Delivery
Xie, Yuxin
Adeno-associated viruses (AAV) are one of the most promising vectors for gene therapy because of their established safety, low immunogenicity, and capability to achieve sustained gene expression. However, many naturally occurring AAV variants have limitations in their potency, particularly in penetrating biological barriers like the blood-brain barrier (BBB). Additionally, their broad and nonspecific tropism can translate into suboptimal cross-species transduction efficiency and potential toxicity, complicating the clinical transition from animal models to humans. These challenges impede the use of naturally occurring AAVs for therapeutic gene delivery in many neurological disorders, such as autism spectrum disorder (ASD), Parkinson’s disease (PD), and Huntington’s disease (HD), as well as other systemic conditions like cystic fibrosis (CF). To overcome these barriers, we developed a computational framework based on ancestral sequence reconstruction (ASR) to engineer synthetic ancestral AAV capsids with the goal of enhancing targeting specificity and potency. We first validated this computational framework by replicating the previously engineered Anc80L65 capsid. Then, with 75 naturally occurring functional AAV sequences and additional experimentally screened variants exhibiting brain-targeting potency, we built an evolutionary framework. We applied multiple computational methods such as enhanced multiple sequence alignment, maximum-likelihood-based phylogenetic tree inference, and ancestral sequence reconstruction with Bayesian inference. With this methodology, we predicted several novel ancestral AAV capsid sequences at critical evolutionary nodes, particularly those representing functional transitions with potentially improved blood-brain barrier penetration and CNS tropism. Our computational framework thus streamlines and accelerates the process of designing ancestral AAV variants for targeted gene therapy applications.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164850</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intercellular flow-mediated force relaxation measurement on the three-dimensional multicellular tissue</title>
<link>https://hdl.handle.net/1721.1/164849</link>
<description>Intercellular flow-mediated force relaxation measurement on the three-dimensional multicellular tissue
Liu, Fan
Three-dimensional (3D) multicellular tissues are increasingly favored over 2D monolayers and single cells; their mechanical properties, such as stiffness, surface tension, and viscosity, have been shown to relate to diseases like fibrosis and tumor metastasis. Multicellular tissues have traditionally been modeled as a viscoelastic material because of their apparent shape rearrangement, an approach that largely neglects the internal structure, including the extracellular matrix (ECM) and the resulting intercellular water flow. These intercellular communications usually provide significant information on diseases such as tumor invasion, but direct supporting evidence of this behavior has been lacking. In this work, we investigate the bulk response of 3D multicellular tissues to such intercellular flows and explore the related mechanism through a tailored micro-mechanics platform.&#13;
Firstly, we design and establish a micro-mechanics platform based on the parallel plate compression (PPC) method. We adopt a precise micro-balance as the sensor to detect force variations in the sample during compression. A piezo linear stage is incorporated to apply the tiny vertical displacements. In addition, a lateral microscope is designed to monitor the compression process in real time. This platform has proved applicable to various samples, including hydrogels, cell spheroids, and natural tissues or organs.&#13;
Then, we propose a critical criterion, the size dependence of the force relaxation time, to distinguish between two material behaviors: viscoelasticity and poroelasticity. For a poroelastic material, force relaxation is due to water redistribution; hence, its speed depends strongly on the sample size. In contrast, for a viscoelastic material, relaxation is determined by the bulk material properties and is thus independent of size. We verify this criterion theoretically via Abaqus simulations and experimentally on classic poroelastic and viscoelastic materials of various dimensions.&#13;
Next, we apply the size-dependence criterion to 3D multicellular tissues to distinguish poroelasticity from viscoelasticity in this biomaterial. We perform PPC on cell spheroids of different sizes using the platform. The force relaxation times are observed to be linearly proportional to sample size for all tested cell lines, demonstrating poroelasticity over our experimental time range. Intriguingly, tests on natural organs, mouse islets, show the same linear correlation. Hence, both cultured spheroids and natural tissues are poroelastic.&#13;
Finally, we explore the mechanism underlying the poroelasticity of 3D multicellular tissues. By inhibiting cell-cell junctions, we demonstrate that intercellular water flow through the extracellular gaps dominates the poroelastic force relaxation in this biomaterial. Further experiments show that the stiffness of the structure and the extracellular gaps within the tissue jointly govern the intercellular water flow: the stiffer the structure and/or the larger the gaps, the faster the water flows, and the more quickly the force decays after compression.&#13;
These findings highlight the fundamental role of intercellular water flow in the mechanical properties of 3D multicellular tissues. The designed micro-mechanics platform also benefits tissue-level research involving micro-newton forces, supporting the development of artificial organoids for early disease diagnosis and treatment.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164849</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating Unprecedented Extreme Scenarios with Limited Data</title>
<link>https://hdl.handle.net/1721.1/164848</link>
<description>Generating Unprecedented Extreme Scenarios with Limited Data
Chang, Kai
Quantifying and predicting rare and extreme events remains a crucial yet challenging task in understanding complex dynamical systems, which are ubiquitous in science and engineering. Many practical challenges arise from the infrequency and severity of these events, including the considerable variance of simple sampling methods and the substantial computational cost of high-fidelity numerical simulations. Numerous data-driven methods have recently been developed to tackle these challenges. However, a typical assumption for the success of these methods is the occurrence of multiple extreme events, either within the training dataset or during the sampling process. This leads to accurate models in regions of quiescent events but with high epistemic uncertainty in regions associated with extremes. To overcome this limitation, we introduce the Extreme Event Aware (e2a, or η-learning) framework, which does not assume the existence of extreme events in the available data. η-learning reduces uncertainty even in ‘uncharted’ extreme event regions by enforcing, during training, the extreme event statistics of a few observables, which can be available or assumed through qualitative arguments or other forms of analysis. This type of statistical regularization results in models that fit the observed data while also enforcing consistency with the prescribed statistics of some observables, enabling the generation of unprecedented extreme events even when the training data lack extremes. Theoretical results based on optimal transport offer a rigorous justification and highlight the optimality of the introduced method. Additionally, extensive numerical experiments illustrate the favorable properties of the η-learning framework on several prototype problems and on real-world precipitation downscaling problems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164848</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A*-Decoding: Token-Efficient Inference Scaling</title>
<link>https://hdl.handle.net/1721.1/164846</link>
<description>A*-Decoding: Token-Efficient Inference Scaling
Chatziveroglou, Ioannis
Inference-time scaling has emerged as a powerful alternative to parameter scaling for improving language model performance on complex reasoning tasks. While existing methods have shown strong performance gains under fixed compute budgets, there has been little focus on optimally utilizing that budget during inference. In this work, we introduce A*-decoding, a search-based inference-time strategy that builds on the A* search algorithm to optimally utilize a fixed compute budget by prioritizing high-quality reasoning paths during generation. We frame language model decoding as a structured search in a state space of partial solutions, applying the A* transition model to identify promising continuations guided by an external process supervision signal. In our experiments, A*-decoding reaches the performance levels of strong inference scaling baselines like best-of-N and particle filtering while using up to 3x fewer tokens and 30% fewer PRM passes under equivalent compute budgets. On the MATH500 and AIME 2024 benchmarks, A*-decoding enables Llama-3.2-1B-Instruct to match the performance of the 70x larger Llama-3.1-70B-Instruct, and allows Qwen3-1.7B to reach o1-like reasoning accuracy. These results highlight the power of structured search in decoding, offering an alternative to brute-force sampling or scale-driven gains. Our work demonstrates how thoughtful inference-time strategies can enhance reasoning in small language models (SLMs), pointing toward future advances in more efficient and scalable language model deployment.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164846</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>U-Net Network Enhancements to Facilitate Rapid Electron Microscopy Imaging for Connectomics</title>
<link>https://hdl.handle.net/1721.1/164845</link>
<description>U-Net Network Enhancements to Facilitate Rapid Electron Microscopy Imaging for Connectomics
Varma, Vikram
Imaging the structural and functional connections between cells in the brain allows neuroscientists to understand the brain by studying neuronal wiring diagrams. To automatically segment and classify the images used in constructing these neuronal wiring diagrams, or connectomes, today's machine learning segmentation techniques require an image scanned with an electron microscope either at a slow dwell time or with small pixel sizes. However, a scalable and more rapid implementation of connectome construction has not yet been realized because of the significant cost of multi-beam electron microscopes and the relatively slow pace at which connectomes can be constructed using a single-beam electron microscope. Segmented connectomes include sections that can be segmented properly from a fast-scanned image as well as sections that require slow scanning for proper segmentation. A potential way to reduce the time in which connectomes are produced and segmented is therefore to first scan samples quickly and perform segmentation using a convolutional neural network, identify the areas of interest that require more detailed imaging through a learning-based error detection network, and then rescan only those identified high-interest areas to produce a fused image for segmentation. This thesis analyzes various machine learning methods for segmentation using the U-Net network and reviews proposed enhancements to the U-Net network that can better utilize electron microscopy images for construction of segmented connectomes. The successful use of fused electron microscopy images could enable higher-speed and lower-cost electron microscopy imaging for connectomics.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164845</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low-Temperature Germanium Waveguides for Mid-Infrared Sensing Applications</title>
<link>https://hdl.handle.net/1721.1/164844</link>
<description>Low-Temperature Germanium Waveguides for Mid-Infrared Sensing Applications
Zhang, Erin Wei
Waveguide integrated devices that operate in the mid-infrared (mid-IR) wavelength range (2.5-12 µm) are used for sensing the fundamental absorption bands of a variety of molecules. Germanium (Ge) is commonly used for photodetection in the near-infrared (near-IR) wavelength range of 1.2-1.6 µm due to its strong absorption from a 0.8 eV direct band gap. At longer wavelengths in the mid-IR range, Ge exhibits transparency that makes it a desirable waveguide material for sensing applications. Its epitaxial growth compatibility with silicon (Si) substrates makes Ge-on-Si an effective platform for mid-IR waveguides. For back-end-of-line (BEOL) integration of waveguides in sensing applications, the thermal budget limits processing temperatures to below 450°C. In this work, we investigated the use of h-line exposure as a commercially viable, low-cost option for patterning low-temperature (LT) Ge-on-Si waveguides using direct write lithography. Waveguide dimensions for optimal confinement in single-mode transverse electric (TE) polarization at wavelengths of 3 µm and 10.4-11.3 µm were modeled, and the direct lithography process was refined. Through dose testing and adjustments to the raster direction and pixel resolution, it was found that direct write lithography lacked the resolution required for low-loss waveguides. Scanning electron microscopy (SEM) revealed inconsistent waveguide widths and sidewall roughness, and e-beam lithography was identified as the preferred lithography process. For future integration of LT-Ge in a foundry process design kit (PDK), a universal thickness of 1.7 µm was found to support single-mode waveguide operation over the 3-11.3 µm wavelength range.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164844</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing Log-Based Coordination Systems for Managed Cloud Environments</title>
<link>https://hdl.handle.net/1721.1/164843</link>
<description>Assessing Log-Based Coordination Systems for Managed Cloud Environments
Jimenez, Gabriel
The distributed systems landscape is undergoing a significant shift toward managed cloud environments, reducing the prevalence of self-hosted coordination services such as ZooKeeper. While ZooKeeper remains a proven and feature-rich solution for coordination tasks, its deployment in cloud environments can introduce component redundancy, because the underlying cloud platform already provides internal mechanisms to ensure coordination guarantees. This thesis investigates the design and evaluates the performance of a log-based coordination service library tailored for managed cloud environments. The proposed library removes the ensemble management overhead inherent in ZooKeeper by delegating durability and consistency responsibilities to the cloud provider’s data layer. This architectural simplification enables a modular design, allowing for tailored implementations that exploit the strengths and mitigate the limitations of a system's specified data layer. The library demonstrated feature parity with ZooKeeper for a targeted subset of coordination features, including leader election, membership tracking, and ephemeral state management. Similarly, migrating an existing ZooKeeper-based application to this work's library required minimal design changes while preserving coordination guarantees. While the results show that this design does not yet match mature coordination services in raw performance, they highlight potential avenues for further research, particularly in optimizing log-based coordination systems for the unique characteristics of cloud-managed data layers. Given the industry’s steady movement toward cloud-native infrastructure, these findings provide a foundation for future exploration into lightweight, platform-integrated coordination solutions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164843</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language Comprehension, Production, and Reasoning in Humans and Neural Language Models</title>
<link>https://hdl.handle.net/1721.1/164842</link>
<description>Language Comprehension, Production, and Reasoning in Humans and Neural Language Models
Eisape, Tiwalayo
How closely do neural language models mirror human language processing, and what can this alignment teach us about cognition? This dissertation presents convergent evidence in comprehension, production, and reasoning that neural language models (LMs) can serve as productive instruments for understanding naturalistic human language use at scale. Studies 1-2 examine comprehension with complementary methods. First, Cloze Distillation—a novel method for aligning models with human next-word predictions—improves both language modeling and reading time prediction, demonstrating that LMs and humans make distinct, complementary predictions. Second, new methods for identifying syntactic information in LM hidden states demonstrate that models learn to implicitly represent incremental syntactic state. These probes also enable targeted interventions, allowing us to manipulate representations to resolve (or induce) temporary misinterpretations, confirming mechanistic understanding. While these studies demonstrate prediction’s role in comprehension, a complete account requires examining whether these mechanisms also shape how humans produce language in real-time. Study 3 analyzes a massive corpus of 2.3 million competitive typing events from TypeRacer.com, uncovering the first evidence of in-context predictability effects in this domain of production. Finally, Study 4 compares human and LM reasoning systematically—LMs achieve higher syllogistic reasoning accuracy than humans while still replicating several fine-grained human-like error patterns that are orthogonal to logical accuracy, including premise ordering effects. These converging findings reveal prediction as a fundamental mechanism in comprehension, production, and reasoning in both humans and LMs. 
While models achieve this through statistical learning rather than specialized cognitive architecture—often outperforming humans yet replicating their systematic biases—this alignment supports predictive processing theories of cognition. This work establishes LMs as scalable cognitive laboratories that can complement traditional experiments, and contributes psycholinguistically principled methods for understanding and controlling LMs.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164842</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proof-of-Work Based Mitigation of Real Time Video DDoS Attacks</title>
<link>https://hdl.handle.net/1721.1/164841</link>
<description>Proof-of-Work Based Mitigation of Real Time Video DDoS Attacks
Echezona, Chukwuemekalum
As the Internet continues to grow in size and complexity, Distributed Denial of Service (DDoS) attacks grow in size and complexity alongside it. One particularly common form of DDoS attack is the TCP SYN flood, which exploits the TCP handshake process to exhaust server resources. This thesis investigates the use of a novel proof-of-work (PoW) based mitigation method to respond to such attacks, specifically in the context of WebRTC video conferencing applications. PoW aims to shift the computational burden from the server to the client by using a puzzle that is hard to solve but easy to verify. Guided by the same evaluation framework used by the original contributors, we conducted controlled experiments using SPHERE, a national research testbed, and the open-source Jitsi Meet video conferencing application to simulate DDoS attacks and measure their impact on video quality metrics such as upload/download bitrate and video framerate. Our experiments covered multiple scenarios with and without active attacks, and with and without PoW mitigation active. Results demonstrate that PoW imposes minimal overhead on legitimate clients while maintaining high efficacy against SYN flood attacks, regardless of whether the attackers do the proof-of-work before sending traffic. These findings highlight PoW as a promising low-overhead mitigation method for WebRTC conferencing systems under the threat of DDoS attacks.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164841</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Priority-Based Search for Lifelong Multi-Agent Path Finding</title>
<link>https://hdl.handle.net/1721.1/164840</link>
<description>Optimizing Priority-Based Search for Lifelong Multi-Agent Path Finding
Huang, Natalie
The lifelong Multi-Agent Path Finding (MAPF) problem requires planning collision-free trajectories for agents operating continuously in dynamic environments. Traditional solvers such as Priority-Based Search (PBS) use fixed branching heuristics, which can be inefficient in high-congestion scenarios. This work explores how learning-based methods can improve PBS decision-making. We develop supervised learning (SL) policies trained from high-quality beam search trajectories and reinforcement learning (RL) policies learned directly through simulation, enabling adaptive branching strategies. Evaluations on warehouse-style and Kiva-style maps with varying agent densities show that learned policies can significantly boost throughput in congested warehouse layouts, while identifying scenarios where classical heuristics remain competitive. Our findings provide guidance on solver selection based on environment layout and congestion characteristics.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164840</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Interpret Language Model Diffs</title>
<link>https://hdl.handle.net/1721.1/164839</link>
<description>Learning to Interpret Language Model Diffs
Goel, Avichal
Finetuning-induced changes to a model’s weights (a “model diff”) are semantically meaningful but often difficult to interpret. This raises the question: can we describe the content of an unknown model diff in natural language? We introduce diff interpretation training, a method that teaches a model to describe its own finetuning-induced modifications. Our approach uses synthetic model diffs to train a lightweight adapter, which in turn can be applied to a compatible finetuned model to make it self-describing. Using two simple task settings, we demonstrate that our method can successfully decode model diffs into accurate natural language descriptions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164839</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Product architectures for solar-powered drip irrigation (SPDI) systems in the Middle East and North Africa</title>
<link>https://hdl.handle.net/1721.1/164838</link>
<description>Product architectures for solar-powered drip irrigation (SPDI) systems in the Middle East and North Africa
Grant, Fiona R.
To feed the growing global population, agricultural production must be intensified using existing land and resources. Sustainable agricultural intensification is particularly important in the Middle East and North Africa (MENA), the most water-stressed region in the world. Solar-powered drip irrigation (SPDI) has the potential to increase water use efficiency and reduce fossil fuel use for irrigation. Despite these benefits, SPDI adoption is limited by its high investment cost and the misalignment between farmers' risk tolerance and broader sustainability goals. Past work has explored three areas of SPDI innovation: low-pressure drip emitters, system cost optimization, and precision irrigation control. This thesis integrates these previous innovations in an end-to-end design process to generate SPDI architectures that are accessible to resource-constrained farmers.
A market study was conducted to understand farmers' priorities and constraints and articulate SPDI value propositions for the target users. Stakeholder surveys were conducted in Jordan and Morocco for farms ranging from 1 to 130 hectares. Three market segments were identified, grouping farmers who face similar economic and knowledge barriers. While farmers generally prioritized irrigation reliability and low system costs, the observed variety in farm size, production volume, and technical expertise suggested that SPDI architectures must be tailored to each market segment.
This thesis proposes an energetic framework that captures system parametric relationships to identify feasible SPDI design trade-offs. The optimized solar power systems were 14%–80% less expensive than conventionally sized designs. Despite significant changes to the hydraulic operating parameters, the proposed SPDI architectures were as reliable as existing systems. For farms with long irrigation times, it was optimal to pair low-pressure drip emitters with an irrigation schedule that tracks the daily solar profile, termed “solar profile matching” (SPM), to maximize direct solar power use. The SPM schedule reduced system cost by minimizing the battery capacity. An economic analysis demonstrated that the optimal SPDI designs could be made cost-competitive with grid power through SPDI retrofit subsidies, which some local governments already support. Researchers and industry professionals could use the energetic framework and techno-economic analysis presented in this thesis to inform system design and policy decisions and promote SPDI adoption.
Finally, this work created guidelines for designing a precision irrigation controller in resource-constrained markets. A controller was conceptualized to implement the SPDI-SPM architecture. The controller functional requirements and design specifications were iteratively defined with stakeholders, and a prototype was tested on two farms in the MENA region. The controller reduced water and energy use by up to 44% and 43%, respectively, while maintaining crop yield. However, the controller relied on battery power to execute the irrigation schedule. A yield loss sensitivity analysis found that using 72%–79% of the available solar energy on average, an increase of about 40% from the experiment SPM schedules, would have been sufficient to reliably irrigate with solar alone. The results suggest that, with software modifications, the proposed controller could eliminate the need for a battery and enable low-cost SPDI systems. If adopted, the proposed controller could make sustainable irrigation practices more accessible to farmers.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164838</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approximate L² Error Control by Solution Post-Processing for Finite Element Solutions of PDEs with Higher-Order Adaptive Methods</title>
<link>https://hdl.handle.net/1721.1/164837</link>
<description>Approximate L² Error Control by Solution Post-Processing for Finite Element Solutions of PDEs with Higher-Order Adaptive Methods
Botto Tornielli, Marcos Julian
With the substantial computing resources available today, computational fluid dynamics simulations allow scientists and engineers to simulate physical problems very accurately. However, achieving this accuracy requires a sufficiently refined computational mesh, which is a primary driver for the high cost of complex simulations. Mesh adaptation methods provide an automated way to determine the regions where a mesh needs the most refinement and generate a new mesh that efficiently targets these regions. In this thesis, we build on previous work in a posteriori error estimation and mesh adaptation for finite element methods to propose a new mesh adaptation method based on L² error control by solution post-processing. A key feature of our method is its natural extension to higher-order discretizations while providing a problem-independent adaptation methodology. Problem-independent adaptation methods do not depend on specific information about the partial differential equation (PDE) problem being solved, and can therefore be applied to a wide range of problems without modification. We present numerical results applying the approximate L² error control method to a two-dimensional advection-diffusion problem with anisotropic features. These results demonstrate the proposed method’s ability to generate well-adapted anisotropic meshes for solutions with polynomial orders 1, 2, and 3. We also apply the approximate L² error control method to a more complex two-dimensional Reynolds-Averaged Navier-Stokes problem with turbulent flow over a flat plate. We compare the convergence of the drag coefficient and the characteristics of adapted meshes obtained with the proposed method and with an output-based adaptation approach. As expected, the approximate L² error control method is not as effective as the output-based approach in reaching a converged drag coefficient value, but it nevertheless demonstrates the ability to effectively control the approximate L² error in the Mach field.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164837</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing Ubiquitous Tactile Sensing through Comprehensive Tooling for Resistive Matrix-Based Sensors</title>
<link>https://hdl.handle.net/1721.1/164836</link>
<description>Advancing Ubiquitous Tactile Sensing through Comprehensive Tooling for Resistive Matrix-Based Sensors
Murphy, Devin
Resistive matrix-based tactile sensors offer a scalable and intuitive approach to capturing human-environment interactions, yet deploying them in real-world systems remains challenging because they must remain portable, adaptive, and long-lasting. This thesis presents the WiReSens Toolkit, an open-source hardware and software platform for developing resistive tactile sensing systems that meet the demands of real-world applications. The toolkit features adaptive hardware for interfacing with resistive sensors and a web-based GUI that mediates access to otherwise complex functionality, including 1) multi-device programming and wireless visualization across three distinct communication protocols, 2) autocalibration methods for adaptive sensitivity, and 3) intermittent data transmission for low-power operation. As a use case for the toolkit, the thesis then introduces a method for the automatic design and fabrication of custom tactile sensing gloves using flexible printed circuit boards (FPCBs), enabling rapid, scalable production. Together, these contributions lower barriers to adoption and support broader exploration of tactile sensing in HCI, robotics, and ubiquitous computing.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164836</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topics in Geometric Machine Learning</title>
<link>https://hdl.handle.net/1721.1/164835</link>
<description>Topics in Geometric Machine Learning
Tahmasebi, Behrooz
Recent advances and the widespread adoption of neural networks have revolutionized machine learning and artificial intelligence. These developments demand learning paradigms capable of processing data from diverse applications and sources. In structured domains such as molecules, graphs, sets, and 3D objects, as well as fields such as drug discovery, materials science, and astronomy, models must account for data structures. The emerging field of geometric machine learning has gained attention for enabling neural networks to handle geometric structures, unlocking novel solutions across scientific disciplines. Despite recent advances, theoretical gaps remain. This thesis aims to address these gaps by studying the benefits and limitations of leveraging geometric structures and symmetries in data. We explore sample complexity, generalization bounds, hypothesis testing for the presence of symmetries in data, time complexity of learning under symmetries, and regularization and optimization in symmetric settings. The goal is to build a robust theoretical framework that validates recent successes and sheds light on unexplored aspects, fostering future progress in geometric machine learning.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164835</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantization Methods for Matrix Multiplication and Efficient Transformers</title>
<link>https://hdl.handle.net/1721.1/164834</link>
<description>Quantization Methods for Matrix Multiplication and Efficient Transformers
Savkin, Semyon
We study quantization in Machine Learning. First, we introduce NestQuant — a technique for quantization of matrix products and post-training quantization of LLMs. Beyond reducing the memory footprint, quantization accelerates inference, as the primary bottleneck during autoregressive generation is often the memory bandwidth. NestQuant leverages two nested lattices to construct an efficient vector codebook for quantization, along with practical encoding and decoding algorithms. The approach is grounded in recent theoretical work that characterizes the optimal rate–distortion trade-off for matrix products. Empirically, on Llama-3-8B, it reduces the perplexity gap between full-precision and quantized models by more than 55% relative to the current state-of-the-art technique (SpinQuant). Second, we investigate data-domain quantization for RF signals. We propose a tokenized transformer for source separation that discretizes RF waveforms into learned tokens and operates directly on the resulting sequences, outperforming strong convolutional baselines. Together, these contributions connect information-theoretic limits with deployable systems: structured vector quantizers accelerate LLM inference and enable competitive discrete representations for RF tasks.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164834</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Data-Driven Contact Localization and Force Estimation for Barometric Tactile Sensors</title>
<link>https://hdl.handle.net/1721.1/164833</link>
<description>Improving Data-Driven Contact Localization and Force Estimation for Barometric Tactile Sensors
Chun, Ethan
Barometric tactile sensors offer a cheap, robust, and customizable means for robots to perceive the world. Central to their operation are models that extract useful information from the sensors’ raw pressure readings. In this work, I focus on improving data-driven methods for single-point contact localization and force estimation using a previously presented three-quarter-sphere barometric tactile sensor. To allow modeling of time-dependent effects in the sensor material, I introduce a multi-threaded data collection system that captures ground-truth contact and sensor data at exactly 100 Hz. Using this data, I construct both feed-forward and recurrent networks, finding that a recurrent network achieves a 15% lower mean absolute error for angular contact localization on the sphere compared to prior methods. The recurrent architecture is computationally efficient enough to run within the constraints of the sensors’ microcontroller. Despite this improvement, I find that more expressive models such as LSTMs tend to overfit on the collected data and that physical phenomena observed during deployment were not well represented by the training metrics. To better understand the extent to which these data-driven methods alone can improve sensor performance, I shift focus away from the modeling and analyze the physical sensor instead. I find that viscous effects in the sensor can render the prediction task unlearnable without historical data and that thermal effects introduce a train-test distribution shift. Finally, I discuss design criteria for a theoretical future barometric tactile sensor that may mitigate the effects found during my modeling and analysis.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164833</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Programming over Heterogeneous Language and Hardware Targets</title>
<link>https://hdl.handle.net/1721.1/164832</link>
<description>Probabilistic Programming over Heterogeneous Language and Hardware Targets
Rojas Collins, Elias G.
Modern probabilistic programming applications, from large-scale Bayesian inference to real-time decision making, require both the expressiveness of CPU-oriented languages such as Gen.jl and the massive parallelism of GPU-backed array languages such as GenJAX, yet existing platforms force users to trade modeling flexibility for performance. This thesis introduces GenUflect, a metalanguage that embeds multiple Gen-compatible dialects inside a single program, allowing each sub-component to run on the most appropriate language and hardware target while preserving Gen’s programmable-inference interface. GenUflect extends Gen’s dynamic-modeling language with the @union, @vmap, @amortize, @amortize≤, and @runtime_union combinators; these macros compile at build-time (or just-in-time) to autonomous generative functions written in the target dialect, link them through a lightweight FFI layer, and manage cross-device data via zero-copy MirrorArrays and lazily materialized traces. The resulting programs remain sound by construction because each foreign subtrace is itself a valid Gen generative function. Empirical studies demonstrate that this hybrid approach yields large practical gains. On a split linear-vs-sinusoidal regression task, GenUflect matches pure GenJAX throughput while running higher-order control logic on the CPU, and is up to two orders of magnitude faster than a pure Gen implementation for datasets of 10⁵ points. In a collapsed-Gibbs sampler for a Dirichlet-process mixture model, GenUflect’s elastic allocation (@amortize≤) lets vectorized GPU kernels adapt to a growing number of clusters; the same inference that takes over an hour in Gen executes in seconds with GenUflect. A probabilistic inverse-graphics pipeline further showcases how heterogeneous submodels can cooperate seamlessly within unified inference code.
By coupling language interoperability with automated data movement and compile-time code generation, GenUflect bridges the gap between flexibility and speed, enabling scalable, expressive probabilistic programs that natively exploit both CPUs and accelerators.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164832</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Under-Coverage of Double Machine Learning Due to Implementation Choices</title>
<link>https://hdl.handle.net/1721.1/164831</link>
<description>Under-Coverage of Double Machine Learning Due to Implementation Choices
Siegmann, Charlotte B.
Double machine learning (DML) estimators can estimate coefficients of interest with far fewer functional-form assumptions than linear econometric methods. However, DML requires researchers to make a range of implementation choices, including the selection of the function class, the random seed, and hyperparameter configurations. While asymptotic theory suggests these choices should not affect final estimates, we show that for 10 economic analyses (8 of them published and peer-reviewed), implementation choices affect the results. In half of the datasets, different implementation choices even change the interpretation of findings between negative, null, or positive effects. We link these results to a framework, meant to complement asymptotic theory, for empirically assessing the performance of machine-learning-based estimators, focusing on precision, coverage, and susceptibility to manipulation. We demonstrate that the coverage of DML confidence intervals is too low, placing an upper bound of 48% on the expected coverage of conventional 95% confidence intervals for published DML economics papers. We show that in the status quo the susceptibility of DML to manipulation by researchers is high, but propose ways to mitigate this susceptibility.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164831</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Irreversible Perturbations of the Unadjusted Langevin Algorithm</title>
<link>https://hdl.handle.net/1721.1/164830</link>
<description>Optimizing Irreversible Perturbations of the Unadjusted Langevin Algorithm
Zhu, Qianyu Julie
A central task in Bayesian inference and scientific computing is to compute expectations with respect to probability distributions that are only known up to a normalizing constant. Markov chain Monte Carlo (MCMC) methods, and in particular Langevin dynamics, provide a powerful framework for this task by constructing stochastic processes that converge to the target distribution. However, practical implementations face two challenges: slow mixing when the target distribution is anisotropic or multimodal, and persistent discretization bias introduced by numerical schemes. This thesis investigates irreversible perturbations of overdamped Langevin dynamics, aiming to accelerate mixing while controlling discretization error. Irreversible perturbations introduce skew-symmetric drift terms that preserve the target distribution while inducing rotational flow, thereby enhancing exploration. Although prior work has established their benefits in continuous-time settings, the impact of discretization and the design of optimal perturbations for discrete-time algorithms remain open problems. We develop a framework for optimizing constant (position-independent) irreversible perturbations in the Unadjusted Langevin Algorithm (ULA). Our approach balances two competing objectives: maximizing the spectral gap of the continuous dynamics to accelerate convergence, and minimizing discretization error that drives estimation bias. Motivated by this, we introduce new criteria that jointly evaluate bias and efficiency, and we show how these criteria identify perturbations that improve performance beyond existing constructions. Theoretical analysis is complemented by numerical experiments on Gaussian and non-Gaussian targets. These experiments demonstrate that appropriately designed irreversible perturbations can reduce mean-squared error without sacrificing stability, while poorly chosen perturbations can degrade performance.
The results highlight the importance of geometry-aware design and motivate systematic optimization strategies for irreversible perturbations. Overall, this work extends the theoretical and practical understanding of irreversible Langevin dynamics, bridging the gap between continuous-time spectral analysis and discrete-time numerical performance. It provides principled tools for constructing efficient MCMC samplers, with potential applications in high-dimensional Bayesian inference and modern machine learning.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164830</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single Camera Motion Compensated Viewpoint Shift</title>
<link>https://hdl.handle.net/1721.1/164829</link>
<description>Single Camera Motion Compensated Viewpoint Shift
Snowdon, Adam
Eye contact is essential for human connection, yet in most video conferencing situations it is not possible. Standard laptop and webcam configurations position the camera at the top of the screen, meaning that when the user looks at other people’s faces in the center of the screen, the camera captures the user looking downward, creating the impression of poor eye contact for remote participants. Solutions that build a 3D model of the face to synthesize a gaze-corrected view have been explored, but they are too computationally costly for most personal computers. To address this computational challenge, we draw inspiration from 2D frame interpolation techniques to synthesize a virtual camera view that repositions the user’s apparent gaze toward the camera. Our method uses a single camera located at the top of the user’s screen and requires only a brief setup period. Assuming there is only one user, our approach creates a virtual camera view that transforms the user’s viewpoint from the screen center to the camera position, enabling more realistic eye contact in video conference calls.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164829</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Framework for 3D Mouse Brain Reconstruction: RNA-based Stitching of Adjacent Tissue Slices and Co-Registration of Multimodal Imaging Data</title>
<link>https://hdl.handle.net/1721.1/164828</link>
<description>A Framework for 3D Mouse Brain Reconstruction: RNA-based Stitching of Adjacent Tissue Slices and Co-Registration of Multimodal Imaging Data
Pan, Jessica N.
Mapping the brain’s complex neural networks requires tracing the long-distance pathways of individual axons, a task that demands a comprehensive 3D reconstruction of the brain. Recently developed spatially resolved transcriptomics (SRT) methods enable the study of gene expression and biomolecule distribution in each neuron in its spatial context, opening the door to more thorough investigation of cell-cell interactions between neurons. However, SRT methods are limited to slices of tissue; therefore, computational alignment is essential to reconstruct a cohesive 3D volume while correcting for both batch effects and inherent sample variability. This thesis presents a novel framework that addresses these challenges through three primary contributions. First, a memory-efficient, non-reference-based algorithm was developed to align the superficial surfaces of adjacent, high-resolution tissue slices. Second, these surface transformations were interpolated through the tissue slices on a proof-of-concept dataset of three adjacent slices. Third, methods for co-transforming fluorescent protein imaging data were explored to fully resolve the cell boundaries between neurons. These three methods are necessary steps toward creating a fully resolved, multimodal 3D model of the brain.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164828</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Non-Convex Objectives to Plan More Optimal Motion for Manipulators</title>
<link>https://hdl.handle.net/1721.1/164827</link>
<description>Optimizing Non-Convex Objectives to Plan More Optimal Motion for Manipulators
Garg, Shruti
Non-convex optimization is essential to tackle increasingly complex and practical problems in kinematic motion planning. Although introducing non-convexity often sacrifices guarantees of feasibility and optimality, making solutions more susceptible to local minima or failure to converge, many robotic systems and tasks are non-convex by nature, necessitating at least somewhat non-convex formulations. In this thesis, we aim to mostly constrain non-convexity to the objective. This optimization structure helps preserve certain feasibility guarantees in theory and usability in practice while enhancing the optimality of solutions, even if global optimality is not achieved. In the first chapter, we demonstrate the effectiveness of non-convex objectives in scenarios where motion planning involves a non-convex parameterization of the configuration space. We keep constraints strictly convex, with the non-convexity quarantined to the objective. This structure guarantees a feasible solution given a feasible initial guess. We primarily use our method to post-process Graphs of Convex Sets solutions in three domains: constrained bimanual motion, motion with guaranteed non-collision, and planning in SO(3). In each case, the non-convex objective compensates for distortion introduced by the parameterization, resulting in more efficient and natural motion. In the second chapter, we propose a teleoperation scheme with full-body motion planning for non-holonomic mobile manipulators. Our key contribution is a Differential Inverse Kinematics (DiffIK) formulation that crafts non-convex objectives to avoid singularities and joint limits, leading to more robust and feasible motion. Unlike before, the constraints are not strictly convex, so the optimization has no guarantees of feasibility. However, we mitigate the non-convexity in the constraints as much as we can by linearizing around the robot’s current position and approximating the highly non-convex non-holonomic constraint.
We explore multiple formulations for singularity avoidance and empirically demonstrate that integrating these objectives into DiffIK improves motion quality for teleoperation for the RBY-1 robot.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164827</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>CableSplat: Optimized 3D Gaussian Splatting for 1D Deformable Pose Estimation</title>
<link>https://hdl.handle.net/1721.1/164826</link>
<description>CableSplat: Optimized 3D Gaussian Splatting for 1D Deformable Pose Estimation
Pai, Sameer
A key challenge in the robotic manipulation of deformable objects is the lack of accurate and efficient systems for estimating their pose in real-time, especially in the presence of occlusion. In this thesis we propose CableSplat, a novel non-parametric method leveraging 3D Gaussian Splatting to estimate the pose of a linear deformable object given RGB images of the object from multiple viewpoints. To facilitate the evaluation of the performance of this method, we develop both simulated and real-world pipelines to collect calibrated and segmented recordings of cables undergoing various manipulations and transformations. We find that our method is consistently able to estimate cable pose to within an average error of ∼2.5 mm across simulated tasks. Furthermore, performance on a scene reconstruction metric drops only slightly between simulated and real-world data, suggesting high-fidelity state estimation even in the real world. CableSplat is therefore a promising candidate for the extension of existing manipulation systems to deformables.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164826</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>scPhen: Single-Cell Phenotype Predictor for Alzheimer’s Disease</title>
<link>https://hdl.handle.net/1721.1/164825</link>
<description>scPhen: Single-Cell Phenotype Predictor for Alzheimer’s Disease
Guo, Sophie J.
Advances in artificial intelligence (AI) and generative AI for representation learning have transformed our ability to model complex biological systems. Single-cell RNA sequencing (scRNA-seq) provides unprecedented resolution into cellular heterogeneity, offering a powerful substrate for modeling disease circuitry. However, predicting patient-level phenotypes from scRNA-seq remains challenging due to limited sample sizes, variable cell counts, and the computational burden of modeling long-context dependencies. We present scPhen, a flexible, parametric deep-learning framework for phenotype prediction from single-cell transcriptomic data, applied here to Alzheimer’s disease (AD) as a paradigm of complex, heterogeneous pathology. scPhen consists of a cell embedding module and a patient embedding module, designed to capture both fine-grained molecular patterns and higher-order cell–cell relationships. The framework supports multiple architectural backbones, including Transformers, Graph Neural Networks (GNNs), and state-space models such as Mamba, Mamba2, and BiMamba2, allowing exploration of tunable components for optimized performance. Across classification and regression tasks, state-space models, and in particular BiMamba2, demonstrated superior predictive accuracy and computational efficiency compared to Transformer-based and hybrid approaches. We further integrated attention-based multiple instance learning to enable variable cell counts per patient and to prioritize phenotype-informative cellular subsets. Interpretability analyses using Integrated Gradients and cell-level attention scores revealed gene programs and cell populations associated with AD progression, highlighting known neuroinflammatory signatures and suggesting novel molecular targets. By unifying cutting-edge sequence modeling architectures with scalable single-cell analysis, scPhen provides a generalizable, high-resolution approach to phenotype prediction. 
While demonstrated here in AD, this framework is readily extensible to other complex diseases and multi-modal cellular datasets, bridging computational innovation and biological discovery.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164825</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Task Functional Localizers Using Naturalistic fMRI</title>
<link>https://hdl.handle.net/1721.1/164824</link>
<description>Predicting Task Functional Localizers Using Naturalistic fMRI
Wilke, Jordan
Functional magnetic resonance imaging (fMRI) data collected during naturalistic stimuli has shown promise for predicting individual traits, biomarkers of disease and functional brain localizations, potentially offering advantages over traditional resting-state approaches. This study investigated the use of interpretable deep learning models to predict demographics and functional task localizer activations from fMRI time-series data collected while participants viewed naturalistic stimuli. Using the data of 143 subjects from the Human Connectome Project, I analyzed 7T fMRI scans from participants watching movies to predict sex, age, and functional localizer activations across multiple cognitive tasks. I employed state-of-the-art machine learning architectures, including DICE and Glacier models, specifically chosen for their interpretable design features that build directed connectivity matrices and produce weighted temporal attention maps. These models aimed to capture dynamic brain activity patterns while maintaining the ability to understand which temporal features drive predictions. The results successfully reproduced previous findings for sex classification but showed poor performance for age prediction, with correlations ranging from -0.175 to 0.243. For functional localizer predictions, models initially appeared to achieve high performance with some specific contrasts having correlations around 0.9 and Dice scores generally above 0.6. However, detailed analysis revealed that these models were primarily predicting group averages rather than learning meaningful inter-subject variability, as evidenced by chance-level subject identification accuracy. This finding contrasts with previous works that demonstrated successful prediction of individual differences in functional localizations. 
The failure to capture inter-subject variability represents a significant limitation, as individual differences in functional regions of interest are crucial for applications such as pre-surgical mapping and disease prediction. My findings suggest that predicting from raw fMRI time-series may require different approaches than those used here, with preprocessed functional connectivity matrices showing promising results, and highlight the importance of sufficient training data to separate signal from noise when learning directly from naturalistic stimuli. Despite these challenges, this work establishes important methodological foundations and identifies key limitations that must be addressed in future research combining naturalistic stimuli with machine learning for fMRI prediction tasks. The findings emphasize the need for models that can capture individual functional differences while maintaining the interpretability necessary for understanding how naturalistic stimuli drive brain-based predictions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164824</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Programming with Low-Level, High-Performance GPU Programmable Inference</title>
<link>https://hdl.handle.net/1721.1/164823</link>
<description>Probabilistic Programming with Low-Level, High-Performance GPU Programmable Inference
Chung, Karen
GPU-compatible probabilistic programming languages (PPLs) have enabled high-performance, data-parallel programmable inference. However, these systems face fundamental trade-offs between expressiveness and performance, as their GPU code generation is automated and black-boxed, limiting optimization opportunities and imposing restrictions on program expressivity. This thesis introduces GenCUDA, a probabilistic programming system that addresses this limitation by embedding the CUDA GPU programming language directly into a C++/CUDA frontend, enabling GPU programmable inference with fine-grained control over runtime and memory profiles. GenCUDA extends the Gen probabilistic programming architecture by providing a dynamic modeling language (DML) that allows users to write performance-critical sections of generative functions as CUDA kernels while maintaining automatic trace management and the generative function interface (GFI). The system supports both sequential and parallel execution contexts through specialized effect handlers that seamlessly compose CPU and GPU code paths. Key technical contributions include: (1) a high-performance GPU distributions library achieving 10-100× speedups over TensorFlow-Probability, (2) memory-efficient trace management via template-optimized parallel effect handlers, and (3) vectorized generative functions that enable massive parallelization of inference algorithms. We demonstrate GenCUDA’s capabilities through comprehensive benchmarks on inference algorithms applied to diverse models including factor graphs, mixture models, and Hidden Markov Models. Results show significant performance improvements over JAX-based implementations: up to 3× speedup for importance sampling on a hierarchical model, 5.7× speedup for parallel Gibbs sampling on factor graphs, and memory efficiency improvements for large-scale mixture models supporting up to 6× as many clusters compared to existing frameworks’ limits. 
The system maintains the composability and expressiveness of probabilistic programming while unlocking GPU performance optimization techniques such as kernel fusion and memory hierarchy exploitation that are inaccessible to higher-level frameworks. GenCUDA demonstrates that embedding low-level GPU programming within automated probabilistic inference workflows can achieve both performance gains and algorithmic expressivity without sacrificing the modularity of probabilistic programming paradigms.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164823</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simplifying Equivariant GPU Kernels through Tile-based Programming</title>
<link>https://hdl.handle.net/1721.1/164822</link>
<description>Simplifying Equivariant GPU Kernels through Tile-based Programming
Kotak, Mit
E(3)-equivariant neural networks have demonstrated success across a wide range of 3D modeling tasks. Until recently, they were bottlenecked by their high memory and wall-time requirements. In this thesis we first provide an overview of recent GPU kernel efforts by both academia and industry that address this issue. These approaches buy performance at the cost of engineering complexity, while still being algorithmically bottlenecked at 10% GPU utilization. We instead trade off performance for engineering simplicity. This not only lowers the barrier to GPU programming but also builds an abstraction layer for reasoning about future algorithmic innovations that can improve GPU utilization. Our kernel, &#119861;3, is based on tiling optimizations and is implemented in just 100 lines of PyTorch-like code. We explore the performance-simplicity tradeoff with two case studies and demonstrate the practicality of our kernel workflow through downstream integration with a production model. We hope this work serves as inspiration to broaden and deepen existing equivariant kernel efforts.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164822</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical exposures in drinking water: contaminant analysis and physicochemical behavior</title>
<link>https://hdl.handle.net/1721.1/164821</link>
<description>Chemical exposures in drinking water: contaminant analysis and physicochemical behavior
Bugher, Nicolette A.
Environmental chemical exposures pose an understudied risk to human health. The quality and accessibility of data on the environmental occurrence and physicochemical behavior of industrial chemicals are integral to accurate exposure risk assessment. In this dissertation, analytical chemistry techniques were developed and leveraged to characterize human exposures to contaminants in drinking water and to improve methods for assessing such risks. The occurrence of organic industrial pollutants in domestic well waters was investigated, with a particular focus on the impacts of region-specific industrial activity (e.g., hydraulic fracturing), legacy pollution sites (e.g., Superfund sites), and geochemistry. The exposure risk to water contaminants of domestic well users was further interrogated by evaluating trends in contaminant concentrations resulting from the implementation and maintenance of in-home water treatment devices. The results show widespread, low-dose mixtures of organic pollutants, where the efficacy of removal by in-home water treatment varied by contaminant class and maintenance frequency. Additionally, analytical methods were optimized to quantify a group of organic water contaminants (i.e., probable carcinogens, N-nitrosamines), improving method sensitivity and critically identifying false-positive interferences. Finally, methods were evaluated and deployed for the determination of the physicochemical properties of N-nitrosamines; the results demonstrate gaps in existing experimental data, provide a valuable methodological intercomparison (two experimental and two computational approaches), and contribute novel partitioning data. This dissertation addresses gaps in occurrence data, analytical method sensitivity, and the reliability of physicochemical parameters for risk assessment.
The combination of method development and implementation enables the study of exposures to water contaminant mixtures at health-relevant concentrations, representative of prevalent exposure pathways.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164821</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From coarse fate choice to precise pattern: post-mitotic progenitor targeting</title>
<link>https://hdl.handle.net/1721.1/164820</link>
<description>From coarse fate choice to precise pattern: post-mitotic progenitor targeting
Nie, Mel F.
Planarians possess remarkable regenerative abilities, driven by pluripotent stem cells called neoblasts. While neoblasts are known to give rise to progenitor cells that form various tissues, whether and the extent to which these progenitors migrate across the animal remains unclear. Irradiation experiments eliminate all neoblasts outside shielded areas, allowing for the visualization of cell migration from the remaining neoblasts, but irradiated animals may not reflect homeostatic progenitor migration patterns. To address this, 5-ethynyl-2’-deoxyuridine (EdU) labeling and plug transplant techniques were used to trace progenitor movement in non-irradiated planarians. Using whole-mount fluorescence in situ hybridization (FISH) and the quantification of EdU-labeled cells, this study demonstrates that progenitor cells are capable of migrating long distances and exhibit a pronounced anterior bias in their movement and integration.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164820</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Data Layouts for Evolving Cloud Table Storage</title>
<link>https://hdl.handle.net/1721.1/164819</link>
<description>Optimizing Data Layouts for Evolving Cloud Table Storage
Sudhir, Sivaprasad
Modern data analytics platforms increasingly adopt disaggregated architectures, storing data in cost-effective cloud object stores. While this approach enables a clean separation of concerns, allowing each layer to be independently managed and scaled, it introduces significant performance bottlenecks due to expensive data movement. Effective data layouts, which organize data to minimize unnecessary data reads, are thus critical to achieving high query performance. However, existing techniques typically rely on manually specified layouts, collect limited metadata, or lack mechanisms to dynamically adapt to changing data and workloads.&#13;
&#13;
This thesis investigates adaptive, metadata-rich, expressive data layouts for cloud table storage. First, we introduce Pando, a correlation-aware layout technique that leverages rich metadata on query predicates to significantly improve data skipping. Next, we propose CopyRight, a partial replication strategy that selectively replicates subsets of data and optimizes each replica differently, efficiently serving heterogeneous query patterns. Finally, we describe Self-Organizing Data Containers (SDCs), a practical table storage layer for the cloud that incrementally reorganizes complex data layouts based on changes in data and workload distributions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164819</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Ensemble Strategies for Generalization in Deepfake Image Detection</title>
<link>https://hdl.handle.net/1721.1/164818</link>
<description>Development of Ensemble Strategies for Generalization in Deepfake Image Detection
Wagh, Rohan M.
The growing accessibility of generative models has enabled the rapid proliferation of deepfake content, posing significant challenges in image-based biometric security and media authenticity. In this thesis, six diverse facial deepfake image datasets are assembled, and four modern detection models are evaluated in a cross-domain scenario. We observe that individual models fail to generalize to images generated by techniques outside the scope of their training data. This often hinders the applicability of a single model in real-world deepfake detection. This thesis proposes ensemble strategies as a means of addressing this lack of generalization. We find that the ensemble models outperform individual models in classifying deepfake images, particularly in terms of accuracy and recall. An exhaustive evaluation of combinations of models shows that ensembles of similar models provide limited benefit, whereas ensembles of complementary models lead to significant improvements in classification performance. Ensembling models based specifically on accuracy and recall metrics also produces models that lower the rate of more harmful false negative predictions. This work highlights the value of ensemble models in improving generalization across diverse image families and provides a framework for building robustness in real-world deepfake detection systems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164818</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effect of settlements on the stresses in building frames</title>
<link>https://hdl.handle.net/1721.1/164706</link>
<description>The effect of settlements on the stresses in building frames
Granberg, Robert J.
Thesis: B.S., Massachusetts Institute of Technology, Department of Building Engineering and Construction, 1935; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1935 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164706</guid>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Irradiation grafting of styrene onto dacron fibers and films</title>
<link>https://hdl.handle.net/1721.1/164705</link>
<description>Irradiation grafting of styrene onto dacron fibers and films
Schnetzer, L. J.; Hendren, J. W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1959; Includes bibliographical references (leaves 43-44).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164705</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of sound transmission irregularity in a one dimensional enclosure</title>
<link>https://hdl.handle.net/1721.1/164704</link>
<description>An investigation of sound transmission irregularity in a one dimensional enclosure
Foster, Isaac C.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1949
</description>
<pubDate>Sat, 01 Jan 1949 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164704</guid>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deposition and characterization of very low pressure CVD silicon/silicon-germanium heteroepitaxial structures</title>
<link>https://hdl.handle.net/1721.1/164703</link>
<description>Deposition and characterization of very low pressure CVD silicon/silicon-germanium heteroepitaxial structures
Tsai, Curtis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1992; Includes bibliographical references (leaves 135-146).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164703</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An experimental study of the law of parity conservation in electromagnetic interactions.</title>
<link>https://hdl.handle.net/1721.1/164702</link>
<description>An experimental study of the law of parity conservation in electromagnetic interactions.
Hegblom, Edwin Richard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164702</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of a dynamic sales call policy model.</title>
<link>https://hdl.handle.net/1721.1/164701</link>
<description>Analysis of a dynamic sales call policy model.
Karash, Richard Ivan.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1968; Bibliography: leaf 97.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164701</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>MEKIN numerical modeling and simulation of the SPERT-III E-core transient tests</title>
<link>https://hdl.handle.net/1721.1/164700</link>
<description>MEKIN numerical modeling and simulation of the SPERT-III E-core transient tests
Tan, Lip-Bu.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1980; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164700</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparative tests of the Boston Elevated Co's surface cars</title>
<link>https://hdl.handle.net/1721.1/164699</link>
<description>Comparative tests of the Boston Elevated Co's surface cars
Jones, Philip C.; Katsainos, Nicholas M.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1912
</description>
<pubDate>Mon, 01 Jan 1912 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164699</guid>
<dc:date>1912-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rules for ring closure and aspects of organolithium chemistry</title>
<link>https://hdl.handle.net/1721.1/164698</link>
<description>Rules for ring closure and aspects of organolithium chemistry
Dupont, William Alan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1980; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164698</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dracut nickel ore ; Geology and concentration, ore no. 2592</title>
<link>https://hdl.handle.net/1721.1/164697</link>
<description>Dracut nickel ore ; Geology and concentration, ore no. 2592
Burton, Eugene.; Spalding, William Livingston.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1905
</description>
<pubDate>Sun, 01 Jan 1905 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164697</guid>
<dc:date>1905-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of recirculating well technology with a cost comparison to pump and treat technology for containment of the CS-10 contaminant plume at the Massachusetts Military Reservation</title>
<link>https://hdl.handle.net/1721.1/164696</link>
<description>Evaluation of recirculating well technology with a cost comparison to pump and treat technology for containment of the CS-10 contaminant plume at the Massachusetts Military Reservation
Smith, Mathew D. (Mathew Darin)
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 1997; Includes bibliographical references (leaves 43-45).
</description>
<pubDate>Wed, 01 Jan 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164696</guid>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The crystallization of sucrose</title>
<link>https://hdl.handle.net/1721.1/164695</link>
<description>The crystallization of sucrose
Brown, Ernest K.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1929; Includes bibliographical references (leaf 81).
</description>
<pubDate>Tue, 01 Jan 1929 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164695</guid>
<dc:date>1929-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems of Visualization for Musical Futures</title>
<link>https://hdl.handle.net/1721.1/164673</link>
<description>Systems of Visualization for Musical Futures
Naseck, Perry
This thesis investigates how large-scale visual systems can communicate the presence, agency, and foresight of improvising musical agents–human and AI–during live performance. We propose a framework for manifesting AI collaborators on stage through five principles: musical transparency, live improvisational reactivity, demonstrated virtuosity, communication for collaboration, and visual fit. Two public performances operationalize these ideas: an addressable-light sculpture that renders harmonic space, and a stage-sized kinetic sculpture built from novel, low-cost Generic Pan Tilt fixtures that visualize the AI’s planned “musical futures.” The latter combines a real-time, MIDI-conditioned, Transformer-based hand-motion model with deterministic, pattern-based mappings that signal states such as resting and regeneration. Audience surveys indicate that viewers perceived links between musical turns and kinetic gestures while requesting clearer explanatory cues. We document the open-source hardware, firmware, and control protocols of the Generic Pan Tilt platform and reflect on design tradeoffs for accessibility, reliability, and expressivity. Finally, we outline a real-time analysis toolchain–motif detection, parallelism, and continuous energy/tension estimators–that emits OSC triggers for lighting, media, kinetic, and spatial-audio systems, enabling reactive shows beyond timecode. Together, these systems advance performable visualizations of human-improvised and AI-driven musical futures.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164673</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Rules for LLM-Generated Code: A RealWorld Case Study</title>
<link>https://hdl.handle.net/1721.1/164672</link>
<description>Design Rules for LLM-Generated Code: A RealWorld Case Study
Lawrence, Jennifer M.
This thesis conducts a case study exploring the interaction between software design, extensibility, and LLM code generation. The central problem we investigate is whether LLMs violate software design principles in ways that introduce bugs and ultimately hinder extensibility. We examine several repositories belonging to the RealWorld collection, a project that demonstrates combinations of frameworks, databases, and programming languages for building full-stack web apps modeled on an existing social media application. We create a concept-based implementation of the RealWorld API. Concept Design defines software systems in terms of the abstract purposes and relationships of self-contained units of functionality; it enforces stringent design standards and aims to help humans better understand complex software behavior. To test code extensibility, we develop three phases of new functionality to be added to the RealWorld API. Each phase is intended to mimic real-world software development, adding functionality commonly found in social media platforms while increasing in nuance and complexity. The code for these extensions is generated by an AI agent, then reviewed by a human coder who classifies and fixes any bugs. In this study, we examine how LLMs interact with software paradigms like Concept Design, the kinds of design violations they produce, and whether these violations correlate with bugs that impede extensibility.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164672</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cognify: An On-Device, AI-powered Learning Assistant</title>
<link>https://hdl.handle.net/1721.1/164671</link>
<description>Cognify: An On-Device, AI-powered Learning Assistant
Huang, Siyong
Large Language Models (LLMs) have proven highly effective for a wide range of natural language processing tasks, but their size and compute requirements often restrict their use to powerful cloud-based infrastructures. In recent years, significant progress has been made in shrinking LLMs while maintaining performance levels comparable to much larger models. We are approaching the point where the capabilities of massive, multi-billion-parameter models can be realistically replicated on consumer-grade devices. This thesis builds upon that foundation by developing an AI-powered note-taking application that runs entirely offline, using only the compute resources available on a personal laptop. The application is designed to listen to lectures alongside the student and provide real-time support through transcription, notes generation, and context-aware search. Achieving this level of interactivity locally introduces challenges in reducing end-to-end latency, which this project addresses through both model-level optimizations and the design of efficient prompting and inference algorithms. A demo of the app can be found on YouTube.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164671</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance Analysis of the Apple AMX Matrix Accelerator</title>
<link>https://hdl.handle.net/1721.1/164670</link>
<description>Performance Analysis of the Apple AMX Matrix Accelerator
Zhou, Jonathan
Apple Silicon integrates a dedicated Apple Matrix Coprocessor (AMX) that executes outer-product-style computations with high throughput, but its public programming model remains largely hidden behind the Accelerate framework. This thesis turns AMX into a more predictable and practical target by combining (i) empirical throughput characterization, (ii) a case study on AMX-specific matrix multiplication (GEMM) design, and (iii) an interpretable rule-based latency model that predicts cycle counts for short AMX instruction sequences. First, microbenchmarks quantify AMX load/store and compute limits across matrix and vector modes and data types. We analyze throughput in both GFLOPS and AMX instructions per cycle, and also observe output-register-based throughput limitations. Second, we develop an in-place GEMM that uses masked outer products and strategically overlapping tiles to avoid the scratch buffers used by Accelerate, outperforming Accelerate while preserving simplicity. Third, we introduce a compact latency model that decomposes cycles into per-instruction BaseTime, symmetric SwitchLatency terms for instruction changes, and instruction FullLatency (data-dependency) terms. Fitted with non-negative coordinate descent on length-2 loops and validated on length-3 sequences via a lightweight loop simulation, the model achieves reasonably high accuracy while remaining useful for those trying to understand the architecture.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164670</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Weak Identification and Network Measurement Error in Peer Effects Estimation</title>
<link>https://hdl.handle.net/1721.1/164669</link>
<description>Weak Identification and Network Measurement Error in Peer Effects Estimation
Wang, William Wei
The growing availability of social network data has enabled a surge of research on social interactions. In particular, peer effects, once considered unidentifiable, have now been shown to be identified given knowledge of the network structure. Despite this positive result, questions remain about the existence and nature of peer effects, due to concerns about identification strength and the reliability of network data. This work investigates two key threats to the estimation of peer effects: weak identification and network measurement error. We show that weak instrument problems arise in moderately dense networks due to rapid averaging, leading to slow convergence rates even when estimators remain consistent. On the measurement error side, we show that additive edge weight errors can be mitigated in such networks due to the same averaging phenomena, but the error remains a relevant threat to consistency in sparser networks. We further demonstrate that when both issues are present, the resulting estimators exhibit non-vanishing bias, suggesting that the combined effect of weak instruments and measurement error can be more severe than either problem in isolation. Overall, our results aim to clarify how these non-standard estimation challenges impact our ability to study peer effects using network data.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164669</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Seeing Beyond Limits with Physics-Informed Priors</title>
<link>https://hdl.handle.net/1721.1/164668</link>
<description>Seeing Beyond Limits with Physics-Informed Priors
Liu, Yang
Conventional imaging systems are limited by dimensionality and visibility: standard sensors capture only two-dimensional data, while light diffuses or scatters across surfaces and through complex media. This dissertation reformulates imaging as an interplay of optical encoding and neural decoding. It models forward physical processes and iteratively refines them using deep denoisers. By embedding physics-informed priors into this optimization, it aims to surpass conventional limits in dimensionality and visibility. First, I develop Privacy Dual Imaging using an ambient light sensor. This approach tackles both dimensionality and visibility challenges when imaging with a single-point, non-imaging component on smart devices. Inspired by 1984’s “Big Brother” telescreen, I demonstrate how subtle light intensity fluctuations can reveal unseen image information; however, the goal is to highlight privacy concerns, not exploit them. It addresses two visibility limits—pixel-less and lens-less imaging—by using the screen as a spatial modulator and exploiting involuntary motion to create a virtual pinhole effect. A quantized, physics-informed prior improves reconstruction from heavily quantized sensor measurements. Second, I propose Snapshot Compressive Imaging (SCI) augmented with deep plug-and-play physics-informed priors to overcome the dimensionality limit of 2D sensors. SCI compressively encodes multiple temporal, spectral, or angular frames into a single measurement. A deep plug-and-play prior algorithm introduces high-dimensional priors learned from images and videos into the iterative reconstruction process, improving fidelity, speed, and flexibility. Experiments show notable gains in reconstruction quality and efficiency across different SCI datasets, including large-format 4K UHD scenarios.
Third, I introduce Rank-Reduced physics-informed priors, showing that large pretrained AI models—especially diffusion models—can act as general visual priors across both dimensionality and visibility challenges. A relax-then-tighten strategy handles ill-conditioning by applying truncated singular value decomposition to reduce rank deficiencies, followed by a Stable Diffusion refiner (SDEdit) plug-and-play prior that constrains reconstructions to valid image spaces. Simulations and passive non-line-of-sight imaging experiments verify the approach’s stability and effectiveness. Physics-informed priors promise to extend the boundaries of imaging, enabling us to see beyond current dimensionality and visibility limits and to unlock new applications from macro-scale to micro-scale observations.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164668</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Large Language Models from a Data Systems Perspective</title>
<link>https://hdl.handle.net/1721.1/164667</link>
<description>Optimizing Large Language Models from a Data Systems Perspective
Chen, Peter Baile
Strong retrieval and reasoning capabilities are essential for large language models (LLMs) to effectively handle a broad spectrum of downstream tasks, such as open-domain question answering and solving math or science problems. While current LLM-based frameworks achieve strong performance on complex retrieval and reasoning tasks, they do so at a high computational cost. Additionally, they often lack structured, systematic problem-solving strategies, leading to unexpected failures. In particular, these models typically operate in an iterative, online, and isolated fashion—failing to exploit relationships across data sources, opportunities for offline computation, and the benefits of reusability—resulting in less-than-optimal outcomes. In contrast, traditional data management systems are engineered for both efficiency and accuracy, with careful coordination across all stages of the query pipeline. Inspired by these principles, this work proposes novel approaches to improve LLM-based retrieval and reasoning by incorporating optimization techniques from data systems. Our evaluation across a range of knowledge- and reasoning-intensive datasets demonstrates significant gains in both accuracy and computational efficiency.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164667</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundational Abstractions for Quantum Programming</title>
<link>https://hdl.handle.net/1721.1/164666</link>
<description>Foundational Abstractions for Quantum Programming
Yuan, Charles
Bringing the promise of quantum computation into reality requires not only building a quantum computer but also correctly programming it to run a quantum algorithm. To obtain asymptotic advantage over classical algorithms for applications including simulation, search, and optimization, quantum algorithms rely on the ability of data in quantum superposition to exhibit phenomena such as interference and entanglement. In turn, an implementation of the algorithm as a program must correctly orchestrate these phenomena in the states of qubits. Otherwise, it would yield incorrect outputs or lose quantum computational advantage.&#13;
&#13;
Given a quantum algorithm, what are the challenges and costs of realizing it as a program that can run on a physical quantum computer? In this thesis, I answer this question by showing how the basic abstractions of programming upon which many quantum algorithms rely – such as data structures and control flow – can fail to work correctly or efficiently on a quantum computer. I then demonstrate how we can leverage insights from research in programming languages to re-invent the software stack – including abstractions, libraries, and compilers – to meet the demands of quantum algorithms. This approach holds out a promise of expressive and efficient tools to program a quantum computer and thereby practically realize its computational advantage.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164666</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fabrication of Superconducting Reflectionless Filters for Quantum Microwave Circuits</title>
<link>https://hdl.handle.net/1721.1/164665</link>
<description>Fabrication of Superconducting Reflectionless Filters for Quantum Microwave Circuits
Bui, Eric
The performance and scalability of superconducting quantum circuits depend critically on the microwave environment. Minimizing signal reflections and suppressing thermal noise are essential for achieving high-fidelity readout and preserving qubit coherence. A significant challenge arises from the use of conventional cryogenic components such as isolators and circulators, which exhibit nonideal out-of-band reflection characteristics. Reflections degrade impedance matching and limit the performance of broadband quantum-limited amplifiers. Superconducting implementations of reflectionless microwave filters offer a promising solution to mitigate these issues. The focus of this work is the fabrication and cryogenic characterization of reflectionless filters compatible with superconducting qubit fabrication flows. Devices were implemented on high-resistivity silicon substrates using aluminum ground planes, integrated nichrome resistors, and crossovers formed with SiO2 interlayer dielectric. Cryogenic measurements at 20 mK demonstrate high return loss, confirming the viability of these filters for co-fabrication with traveling-wave parametric amplifiers (TWPAs) and circuit quantum electrodynamics (cQED) architectures. The filters exhibit low insertion loss in the passband to maintain quantum measurement efficiency and provide broadband reflection suppression across frequencies relevant to superconducting qubits, offering a scalable way to manage microwave noise in superconducting quantum processors.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164665</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>CONDOR: Clinical Ontology-aware Networked Data Organization and Retrieval</title>
<link>https://hdl.handle.net/1721.1/164664</link>
<description>CONDOR: Clinical Ontology-aware Networked Data Organization and Retrieval
Dongo Aguirre, Gyalpo Melchisedeck
Until now, state-of-the-art research into AI-driven clinical workflows has been confined to proprietary, closed-source systems from vendors like Epic and Oracle, or private experiments like Stanford’s ChatEHR, creating a critical barrier to academic innovation. This thesis introduces CONDOR, the first fully open-source and replicable research environment designed to simulate an agentic, conversational AI interacting with a high-fidelity Electronic Health Record (EHR). By integrating an open-source, FHIR-native EHR (Medplum) with a complex, realistic public clinical dataset (MIMIC-IV FHIR), CONDOR provides a foundational testbed that has been previously unavailable to the research community. The framework’s primary contribution is a novel alignment and evaluation methodology that adapts the principles of SelfCite to the clinical domain. We propose a ‘ClinicalConfidence’ score to quantify the trustworthiness of generated statements and programmatically generate a high-quality preference dataset for alignment using Simple Preference Optimization (SimPO). We compare a standard vector-based Retrieval-Augmented Generation (RAG) baseline against a more advanced GraphRAG architecture that leverages a two-tiered knowledge graph of patient data and medical ontologies. Our results demonstrate that the full CONDOR system, combining GraphRAG with SimPO alignment, significantly improves citation quality and verifiability, establishing a new open-source benchmark for the development of safe and reliable clinical AI.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164664</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Stage LLM Reasoning for Automated Detection and Classification of High-Impact Misinformation</title>
<link>https://hdl.handle.net/1721.1/164663</link>
<description>Multi-Stage LLM Reasoning for Automated Detection and Classification of High-Impact Misinformation
Nair, Anushka Manchanda
As of 2025, social platforms have become a primary news source, magnifying the reach of misleading content [1]. Exposure to misinformation has been linked to shifts in public attitudes and behavior, including vaccine uptake [2] and voting behaviors [3]. Current misinformation detection approaches, however, often focus on a narrow definition of misinformation: factual claims that can be clearly judged as true or false. Recent research suggests the problem lies elsewhere: overt falsehoods (“vaccines contain microchips”) can carry little harm, while technically accurate but decontextualized narratives can be more influential. Allen et al. (2024) [4] found that factually accurate “vaccine-skeptical” content had a much greater impact on vaccine hesitancy than misinformation flagged by fact-checkers. These narratives work by omitting information, using misleading framing, or cherry-picking evidence, forms of manipulation that can elude traditional fact-checking. Though professional fact-checkers are often able to recognize these tactics and the broader context of information, they cannot keep pace with the volume of online content. This thesis designs a Large Language Model (LLM) based pipeline meant to partner with, rather than replace, human fact-checkers. The system decomposes content into its explicit and implicit claims, rhetorical tactics, and the “missing context” questions it raises; retrieves evidence from fact-check databases and reliable sources; and synthesizes grounded explanations while assigning calibrated harm scores to guide triage. Evaluated on fact-checked tweets, the pipeline matched expert judgments in 92.6% of cases where experts agreed, and flagged for review posts where experts disagreed, a gray zone requiring human judgment.
The system’s explanations ranked higher than crowdsourced Community Notes in helpfulness, clarity, and trustworthiness when assessed by an LLM, and harm evaluations aligned with human reviewers in 87.5% of cases, enabling prioritization of content with greatest potential impact. Despite constraints of sample size and processing latency, the results demonstrate the feasibility of a human–AI workflow that treats disagreement as a signal and directs scarce attention towards high-impact misinformation that current automated systems can miss.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164663</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Simple Chemical Heuristics to Model and Discover Materials</title>
<link>https://hdl.handle.net/1721.1/164662</link>
<description>Learning Simple Chemical Heuristics to Model and Discover Materials
Ma, Andrew
Computational approaches have long played an important role in the field of materials science, driving both the scientific study of materials’ fundamental properties and the design of materials for technological applications. Currently, mainstream methods in computational materials science typically rely on either first-principles calculations or deep learning models. In this thesis, we take a different direction by developing remarkably simple data-driven models for predicting fundamental properties of materials, including electronic topology, metallicity, and band gap. These models take the form of highly interpretable chemical heuristics. A key finding of this work is the surprising result that electronic topology diagnosis – often regarded as a highly complex task – can, in fact, be performed heuristically using a simple and intuitive model. We further integrate this model into a workflow for discovering new topological materials. Altogether, this work revisits the classic idea of chemical heuristics through a modern data-driven lens, shedding new light on fundamental problems in materials science.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164662</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial Emission and Polarization Control in Integrated Photonics for Optical-Trapping and Trapped-Ion Systems</title>
<link>https://hdl.handle.net/1721.1/164661</link>
<description>Spatial Emission and Polarization Control in Integrated Photonics for Optical-Trapping and Trapped-Ion Systems
Sneh, Tal
Recent advances in silicon photonics have yielded impressive results in fields including biophotonic optical tweezers and trapped-ion quantum systems. However, the majority of these demonstrations, while offering advantages in size, cost, and dense integration, lag behind their bulk-optic counterparts, limited by a lack of critical advanced functionality such as spatial control of light in the near field or polarization control at visible wavelengths. This thesis addresses this gap by designing and experimentally demonstrating the first, to the best of our knowledge, cell experiments using single-beam integrated optical tweezers, chip-based 3D printers, and integrated polarization rotators and splitters at blue wavelengths. First, we demonstrate optical trapping and tweezing of microspheres using a near-field-focusing integrated optical phased array, at a standoff distance over two orders of magnitude larger than prior integrated demonstrations. We then use this system to perform the first cell experiments using single-beam integrated optical tweezers. Second, we use a tunable integrated optical phased array operating at red wavelengths to print designs in a visible-light-curing resin, demonstrating the first chip-based 3D printer. Third, we design and experimentally demonstrate the first integrated polarization rotators and splitters operating at blue wavelengths, enabling polarization control on chip for sophisticated integrated manipulation of trapped-ion and neutral-atom quantum systems. Finally, we develop key polarization-diverse integrated-photonics devices and utilize them to implement a variety of integrated-photonics-based polarization-gradient-cooling systems, culminating in the first demonstration of polarization-gradient cooling of a trapped ion by an integrated-photonics-based system.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164661</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language Modeling from Visually Grounded Speech</title>
<link>https://hdl.handle.net/1721.1/164660</link>
<description>Language Modeling from Visually Grounded Speech
Lai, Cheng-I Jeff
Recent advancements in spoken language processing have significantly reduced automatic speech recognition (ASR) error rates, driven by large-scale supervised training on paired speech–text data and, more recently, self-supervised pre-training on unpaired speech and audio. These methods have facilitated robust transfer learning across diverse speech and audio tasks. However, fully leveraging multimodal inputs, particularly visual context, remains underexplored. This thesis addresses this gap by developing novel language modeling techniques directly from visually grounded speech. We first introduce the Audio-Visual Neural Syntax Learner (AV-NSL), an unsupervised parser that recovers constituency trees directly from raw speech paired with images, demonstrating how visual context effectively bootstraps grammar induction without textual supervision. Next, we investigate Audio-Visual Word Discovery for Speech Translation, using the Fisher Spanish–English corpus to train a series of speech-to-speech translation models based on pseudo-word units discovered via audio-visual grounding. This study highlights that simplistic acoustic tokens and limited training data degrade re-synthesis and translation quality, underscoring two crucial missing ingredients: richer semantic tokens and large-scale training. Guided by these insights, we present Audio-Visual Gemma (AV-Gemma), a family of multimodal foundation models that condition jointly on images and learned semantic speech tokens. At scale, AV-Gemma generates visually coherent spoken captions and transfers robustly to tasks such as video-to-speech generation and spoken visual question answering, significantly advancing multimodal spoken-language processing.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164660</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>ALPACA: An Algorithmic Pipeline for Automated&#13;
Contour Annotation of Carnatic Music:&#13;
A Dynamic Programming Framework for Pitch Segmentation and Note Transcription</title>
<link>https://hdl.handle.net/1721.1/164659</link>
<description>ALPACA: An Algorithmic Pipeline for Automated&#13;
Contour Annotation of Carnatic Music:&#13;
A Dynamic Programming Framework for Pitch Segmentation and Note Transcription
Parthasarathi, Sruthi
In recent years, a wide range of computational techniques have been developed to extract information from recorded performances of Western music. However, these methods often achieve limited success when applied to non-Western musical traditions. Carnatic music, in particular, poses unique challenges due to the absence of a standardized notation system and the lack of a consistent mapping between frequency bands and note categories. This project introduces a dynamic programming–based transcription framework, incorporating novel methods for label estimation, contour segmentation, and related subtasks, and establishes the foundations for end-to-end automatic transcription of this art form.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164659</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Diverse Treatment Policies from Observational Health Data</title>
<link>https://hdl.handle.net/1721.1/164658</link>
<description>Modeling Diverse Treatment Policies from Observational Health Data
Ejilemele, Abe
Learning policies for real-world tasks often requires modeling human behavior, especially in domains like healthcare and driving. In these settings, skills are learned from expert human demonstrations, but such data are typically multimodal, violating the common single-expert assumption. We study sequential clinical treatment decision making in the offline imitation learning setting, where environment interaction is prohibited, reflecting the challenges of experimentation in safety-critical domains. Existing methods for multi-expert offline imitation learning often restrict the latent space, underspecify its structure, or omit objective terms that prevent latent collapse and encourage behavior discovery. We propose a fully offline approach that addresses these shortcomings and improves learning from multi-expert demonstrations through modifications to the formulation of the latent approximate posterior and the model architecture. We suggest that our method is more robust to real-world settings where the true number of demonstrators may not be known. We also incorporate an occupancy matching term into our objective that injects awareness of the rollout distribution over trajectories into our behavior cloning objective. We evaluate our method against baselines on both simulated multi-expert demonstrations from an extended S-CVSim and real-world demonstrations from MIMIC. Our approach achieves consistently higher next-step action prediction and behavior discovery performance. While ground-truth expert policies are unavailable for MIMIC, visual analysis shows our method uncovers clinically meaningful variations in expert strategies, reflecting treatment population diversity.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164658</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Assembly of General Objects</title>
<link>https://hdl.handle.net/1721.1/164657</link>
<description>Scalable Assembly of General Objects
Tian, Yunsheng
In this thesis, I present a scalable system towards fully automated and flexible robotic assembly that generalizes over diverse geometries and complex structures. Most real-world objects are assemblies composed of multiple parts. Assembly presents significant challenges for robots to execute long-horizon, contact-rich manipulation with both reliability and generalization. However, most manufacturing facilities today still rely heavily on manually programmed assembly lines, which require significant labor, time, and setup costs yet offer no flexibility to object variations. My proposed system synergizes global multi-step planning with local reactive learning-based control to enable generalizable and precise assembly. Such an integrated paradigm effectively leverages the best of both worlds, accomplishing results that neither planning nor learning could achieve alone. For planning, I leverage guidance from physical simulation and learned feasibility networks to efficiently search for part sequences, precise motions, and stable grasps for dual-arm robots over long horizons. For learning-based control, I train robust policies via reinforcement learning for submillimeter-level insertion across different part geometries, assembly paths, and grasp poses. I introduce and open-source the largest-scale assembly dataset to date and demonstrate my system’s generalization on thousands of simulated assemblies as well as through end-to-end real robot experiments. By integrating planning and learning, I showcase the first system to achieve complete and generalizable real-world multi-part assembly without domain knowledge or human demonstrations. Although the system plans and learns purely in simulation, it transfers zero-shot to the real world and achieves 80% successful steps.
Finally, I will share insights that further scale up robotic assembly and opportunities to extend to general manipulation, and discuss future directions to equip general-purpose robots with multi-step, precise manipulation capabilities.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164657</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Modular Superconducting Quantum Processor&#13;
using Chiral Waveguide Quantum Electrodynamics</title>
<link>https://hdl.handle.net/1721.1/164656</link>
<description>Towards a Modular Superconducting Quantum Processor&#13;
using Chiral Waveguide Quantum Electrodynamics
Yankelevich, Beatriz
As the field of superconducting quantum computing advances, networking qubits within a single system becomes essential for building modular processors. Modularity allows the system to circumvent scalability constraints and enables architectures and computational schemes that exploit non-local connectivity to enhance processing capabilities. This work proposes non-local entanglement generation methods based on the theory of chiral waveguide quantum electrodynamics, the quantum-optical framework that describes systems of atoms coupled non-reciprocally to a continuum of modes. We leverage these effects to design a chiral communication module composed of multiple superconducting qubits, capable of both directional single-photon routing and the realization of chiral, driven-dissipative entanglement protocols.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164656</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fine-tuning Boltz for Antibody-Antigen Binding&#13;
Prediction</title>
<link>https://hdl.handle.net/1721.1/164655</link>
<description>Fine-tuning Boltz for Antibody-Antigen Binding&#13;
Prediction
Kim, Ji Won
Accurate prediction of antibody-antigen binding is a central challenge in computational immunology. Its direct implications for therapeutic antibody design and vaccine development have made it one of the most rapidly growing fields. Recent advances in protein language models and structure prediction have provided new tools for modeling, yet these approaches often fall short in capturing the fine-grained features that drive binding specificity in antibodies and antigens. This thesis evaluates multiple strategies for improving predictive performance. First, we investigate a custom multiple sequence alignment (MSA) experiment. Standard Boltz-2 training relies on MSAs from broad protein databases, which capture global diversity but under-represent lineage-specific constraints. To address this, we constructed antibody-specific MSAs to test whether restricting the search space to antibody repertoires improves model learning. Unfortunately, gains in downstream binding prediction were limited, suggesting that further work is needed on training models for specific databases in the first place. Our second line of investigation focused on fine-tuning Boltz-2, a generative structural foundation model, using curated antibody–antigen data. By leveraging Boltz-2’s internal sequence embeddings, we trained a predictive model for binding affinity. This approach yielded stronger ROC performance compared to baseline models, achieving a validation AUROC of 0.645, demonstrating the advantages of structural generative priors for antibody–antigen binding prediction.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164655</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deterministic Circuit Range Avoidance is (Likely) Intractable</title>
<link>https://hdl.handle.net/1721.1/164654</link>
<description>Deterministic Circuit Range Avoidance is (Likely) Intractable
Ilango, Rahul
Circuit Range Avoidance (denoted Avoid) is a computational problem where, given a Boolean circuit with more output bits than input bits, one must output a string outside of the range of the circuit. A simple counting argument implies that such a string must always exist and also guarantees that outputting a uniformly random string is correct with good probability. A natural question is whether this can be derandomized: does there exist an efficient deterministic algorithm for Avoid? We give the first evidence that deterministically solving Avoid is intractable. We show that there is no polynomial-time algorithm for Avoid under plausible assumptions in complexity theory and cryptography. Specifically, our assumptions are that NP ≠ coNP and that subexponentially-secure indistinguishability obfuscation exists.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164654</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Tackle Task Variations in Control - A Transportation Context</title>
<link>https://hdl.handle.net/1721.1/164653</link>
<description>Learning to Tackle Task Variations in Control - A Transportation Context
Jayawardana, Vindula Muthushan
Real-world control tasks are messy and often exhibit task variations. Practical solutions to these problems must exhibit generalization across task variations. For example, in the task of controlling traffic signals, control strategies must adapt to different intersection topologies (the variations), each with distinct dynamics. In this thesis, we consider the challenge of coping with task variations in the context of transportation problems, specifically in roadway interventions where many such variations are both common and imperative to handle. We develop machine learning techniques to address three key challenges: 1) quantify the impact of task variations in control, 2) model them to align with the real world, and 3) optimize in the presence of them. To this end, we begin with a large-scale case study of cooperative eco-driving and illustrate how explicitly modeling task variations can surface otherwise overlooked insights. Building on this, we argue for the necessity of formally incorporating task variations into problem specifications, emphasizing that task underspecification due to loosely defined task variations can severely impair decision-making. We then introduce a contextual reinforcement learning algorithm capable of leveraging the structure of task variations to generalize effectively in cooperative eco-driving with autonomous vehicles. We also present IntersectionZoo, a benchmark designed to promote the development of learning algorithms that generalize by exploiting task variation structures, thus standardizing progress in the field. Last, we explore task variation modeling through a generative modeling lens, using human driver behavior modeling as a case study. Overall, this thesis lays the groundwork for robust control methods by leveraging machine learning to tackle task variations, specifically in roadway intervention designs.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164653</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Empirical Evaluation of LLMs for the Assessment of Subjective Qualities</title>
<link>https://hdl.handle.net/1721.1/164652</link>
<description>An Empirical Evaluation of LLMs for the Assessment of Subjective Qualities
Ranade, Esha
Large Language Models (LLMs) have achieved remarkable success in natural language processing tasks and are increasingly being used for language generation. Significant advancements in this field have unlocked capabilities that enable their adoption in sophisticated roles, including acting as evaluators or "judges" of text for various attributes such as factuality, relevance, fluency, and reasoning quality. However, their understanding and ability to assess subjective attributes, such as the level of formality in a piece of writing, and produce content matching these subjective attributes remains unclear and underexplored. This research develops a methodology to study how LLMs evaluate subjective attributes. It has three primary contributions: (i) a reproducible user study to generate human-annotated labels for different attributes, (ii) an analysis of the extent to which different LLMs provide subjective labels aligned with human annotators, and (iii) an analysis of the extent to which LLMs generate content aligned with specified intended subjective labels, relative to humans. The user study and the analyses have been conducted both with and without a reference scale. The scale itself, the survey design, and the evaluation questions have all undergone multiple rounds of iteration informed by study tester feedback to improve clarity, consistency, and reliability for the final study. Comparisons between human-generated ratings and LLM-generated ratings for both human-generated content and LLM-generated content reveal the extent to which LLMs align with human judgment, providing insights into their capabilities and limitations. While humans typically do better in their roles, LLMs are able to attain reliably high levels of success in producing and judging text, despite tending to err on the more-formal side. Both groups’ performance increases significantly with the aid of a formalized reference scale. 
Across the suite of models tested, OpenAI’s GPT family leads overall performance, with Anthropic’s Claude and Meta’s LLaMA series showing notable strengths in specific formality ranges. Although this work focuses on the formality attribute of text, the methodology developed can be used to evaluate other subjective qualities of text, such as conciseness, usefulness, or persuasiveness. Ultimately, these findings may guide future efforts to fine-tune LLMs to produce text that more precisely matches the desired stylistic or ethical standards.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164652</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Burst Parallelism of SigmaOS processes with CRIU</title>
<link>https://hdl.handle.net/1721.1/164651</link>
<description>Accelerating Burst Parallelism of SigmaOS processes with CRIU
Tang, Frederick
σOS is a multi-tenant cloud operating system designed to integrate the agility of serverless environments with the interactivity of microservices. A goal of achieving this integration is the ability to start new instances of server processes quickly. However, σOS only handles σcontainer initialization, and does not assist with runtime and app initialization costs. One approach to overcome this challenge is to checkpoint processes using Checkpoint/Restore in Userspace (CRIU). CRIU is a Linux toolset which can start new server instances by restoring them from a saved checkpointed state, avoiding the full cost of reinitialization and setup. This thesis introduces σCRIU, which adapts CRIU for burst-parallel spawning of microservices in σOS. σCRIU implements a number of optimizations: compressing checkpointed proc metadata to reduce network communication costs, implementing demand paging using a lazy page service, and caching kernel metadata to reduce CRIU’s restore operation latency. These optimizations allow σCRIU to start new microservices on remote machines quickly while still making use of CRIU’s existing, proven checkpoint-and-restore technology.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164651</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modern methods for causal inference and missing data</title>
<link>https://hdl.handle.net/1721.1/164650</link>
<description>Modern methods for causal inference and missing data
Xia, Eric
The proliferation of data-driven approaches in a wide array of settings is one of the defining characteristics of the modern era. With this rise, there has been much focus on using data to answer causal questions, e.g., whether A causes a change in B. Furthermore, modern data collection practices have given rise to datasets that are often quite messy, sometimes missing important entries. Both problems are highly relevant to practitioners in a variety of disciplines, including policy-makers looking to make critical decisions that can influence the lives of many. On the surface these problems seem quite distinct, yet the literature has highlighted deep connections between the two settings; indeed, many methods for addressing one question can often be repurposed to address the other. Both settings are classical, and many approaches to them remain so, but there has been great recent interest in developing techniques and algorithms that harness modern advances in statistics and machine learning. This thesis contributes to the literature by providing new methods as well as novel understandings of existing ones.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164650</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visually Accurate Database-Enabled Reconstructions of Scenes (VADERS)</title>
<link>https://hdl.handle.net/1721.1/164649</link>
<description>Visually Accurate Database-Enabled Reconstructions of Scenes (VADERS)
Gosalia, Mehek
This work introduces a novel pipeline for scene reconstruction that jointly prioritizes semantic accuracy and visual fidelity, addressing a gap in current approaches. Prior pipelines often emphasize either semantic analysis or photorealistic rendering, but rarely both. This method combines scene analysis, segmentation, and retexturing to yield reconstructions that preserve structural semantics, while convincingly reflecting the visual qualities of the original image. The motivation lies in the limitations of existing systems. Existing database-assisted approaches depend on proprietary datasets that restrict stylistic diversity or on in-the-wild assets. This constrains expressiveness and often produces results that are visually misaligned. Conversely, pipelines optimized for visual realism neglect semantic correctness, generating outputs that may appear plausible but lack categorical or structural grounding. Our framework addresses this by first enforcing semantic accuracy via selecting database assets, then editing those assets to be stylistically faithful to the reference, producing reconstructions that are both interpretable and expressive. We begin with database-assisted scene analysis, using an open-source asset database containing chairs, lamps, sofas, tables, and benches. Input images are depth-mapped, segmented, and parsed into object masks, which are matched to database assets based on semantic labels and visual correspondence. Each asset is broken into semantic segments and rescaled per-component using vision-language model predictions to better match the reference object. Finally, the asset is retextured based on the image mask of the reference object in the input image. Evaluation on six diverse scenes—both photographs and artworks—shows the pipeline produces semantically grounded, visually accurate reconstructions under non-research conditions.
Future work will focus on expanding the asset database, reducing reliance on proprietary texturing, and releasing an open-source implementation to broaden accessibility.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164649</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Planar Silicon Solar Cells for Singlet Fission Sensitization</title>
<link>https://hdl.handle.net/1721.1/164648</link>
<description>Designing Planar Silicon Solar Cells for Singlet Fission Sensitization
Wang, Janet Z.
Singlet fission (SF)-sensitized silicon (Si) solar cells offer a path towards surpassing the Shockley-Queisser efficiency limit for single-junction solar cells. However, realizing efficient charge transfer from the SF material to Si remains a significant challenge that requires careful interface engineering. Prior work showed that Si microwire cells sensitized with tetracene (Tc) and a zinc phthalocyanine (ZnPc) donor layer can boost photocurrent and external quantum efficiency (EQE). Planar devices are simpler to fabricate than microwire devices and reproduce the planar geometry of optical test samples to connect studies of the interface to device performance. This thesis integrates modeling and experimental approaches to guide the design of planar SF-sensitized Si solar cells. We developed a fabrication process for planar cells comparing varied oxide passivation layer growth conditions and surface treatments, Si(100) versus Si(111) orientation, and junctions formed by diffusion doping versus ion implantation. Complementary surface photovoltage (SPV) measurements on matching optical stacks show evidence of an illumination-induced transient positive charge density at the Tc/ZnPc/oxide/Si interface, consistent with increased field effect passivation. We find that SPV responses on AlOx/n-Si are dominated by substrate band bending; consequently, SiOx is the preferred passivation to suppress the background and isolate the SPV signals driven by the organics. A drift–diffusion model shows that the diffusion doping (exponential) emitters reduce surface recombination rates compared to ion implantation (Gaussian) emitters. We also show that a positive fixed charge density at the surface enhances short wavelength EQE, with the effect strongest for Gaussian emitters. Together, these results provide practical design rules for planar SF-sensitized Si cells and the study of charge transfer at organic-Si interfaces.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164648</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microsecond Time Synchronization for Computing Fiber Networks</title>
<link>https://hdl.handle.net/1721.1/164647</link>
<description>Microsecond Time Synchronization for Computing Fiber Networks
Li, Jenny Y.
We present a microsecond-accurate time synchronization method and time localization system for a sensor network of spatially-separated, low-power Bluetooth nodes, with the goal of integrating this system into thermally-drawn computing fibers. Each node consists of an nRF54L15 SoC paired with an ICS-43434 digital I2S microphone, enabling synchronized audio data collection. Our design leverages Bluetooth LE connection events to synchronize local clocks with sub-10 µs accuracy across a multi-peripheral topology; we trigger precise, CPU-independent hardware events to timestamp audio samples. We demonstrate that timestamped I2S data stored in external SPI flash can be correlated across devices to extract TDoA measurements for localizing sound sources. Cross-correlation techniques allow us to estimate direction and position, with localization errors reduced from 4.17 m to 0.39 m through clock synchronization. This prototype provides a roadmap for embedding synchronized sensing and computation within fibers and smart textiles, with implications for on-body audio perception and distributed sensing in flexible electronics.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164647</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From String to Structure: Graph Threading for Physical Assembly</title>
<link>https://hdl.handle.net/1721.1/164646</link>
<description>From String to Structure: Graph Threading for Physical Assembly
Lin, Rebecca Y. E.
Many artistic and engineering applications—from beadwork to deployable structures—create intricate, and sometimes dynamic, designs by threading cord through tubular components. We model the underlying design challenge—threading tubes so that they achieve a target connectivity when the string is pulled taut—as graph threading. In this formulation, tubes and their junctions correspond to edges and vertices of a graph, and the goal is to find a closed walk that induces a connected graph at every vertex while avoiding U-turns. We study two optimization objectives motivated by fabrication and deployment: minimizing length to reduce material cost and assembly time, and minimizing turn to reduce frictional resistance during deployment. For the length metric, we present a polynomial-time algorithm via reduction to minimum-weight perfect matching, prove tight worst-case bounds on optimal threadings, and identify special cases with faster algorithms. For the turn metric, we characterize the complexity landscape, proving NP-hardness for graphs of maximum degree 4, tractability for degree 3, and giving exact and approximation algorithms for restricted variants, including rectangular grid graphs. Finally, we turn from theory to fabrication, proposing multi-configuration threading—a new approach for achieving multiple predetermined configurations within a single system. As in earlier chapters, framing the problem in graph-theoretical terms provides access to powerful problem-solving techniques, guiding both algorithmic analysis and physical design.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164646</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Steering Vision at Scale: From the Model Weights to Training Data</title>
<link>https://hdl.handle.net/1721.1/164645</link>
<description>Steering Vision at Scale: From the Model Weights to Training Data
Materzyńska, Joanna
We study the interpretability and controllability of multimodal and generative models, with a particular focus on text–image representation models and text-to-image diffusion systems. We begin by addressing limitations in CLIP’s multimodal embeddings, specifically the entanglement between visual and textual concepts within images. We demonstrate the consequences of this entanglement in both generative and discriminative tasks, and introduce a method for disentangling visual and textual representations. We showcase the utility of these disentangled embeddings in typographic attack resistance, improved image generation, and robust out-of-domain OCR detection. Building on this foundation, we explore methods to enhance the controllability of diffusion models. First, we tackle the challenge of unwanted concept generation. We introduce a technique to remove specific visual concepts using only their names, leveraging negative prompts and guidance to suppress target content without modifying training data or requiring model retraining. This approach enhances ethical alignment and enables greater user control in generative systems. We then turn to the complementary problem: incorporating new concepts. We present a few-shot motion customization technique for video generation models, which transfers motion patterns from a small set of examples to novel subjects. This method maintains the generalization capabilities of the base model while enabling consistent, subject-agnostic animation that preserves both identity and temporal coherence. To improve the fine-grained control of visual outputs, we propose a method for continuous manipulation of image attributes. This framework introduces smooth, intuitive controls that allow for dynamic, continuous steering of generated images. Unlike prompt engineering or token-level interventions, our approach offers real-time adjustment without sacrificing output realism.
Finally, we examine whether artistic styles in diffusion models require large-scale pretraining or can be learned in a lightweight, post-training manner. To this end, we train a base model on art-free data and introduce a compact adapter method that learns stylistic concepts from a small set of exemplar artworks. Our findings suggest that artistic domains can be integrated efficiently and ethically, without reliance on web-scale scraped datasets.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164645</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Data Attribution-Based Approach to Model Diagnosis in LC-MS/MS Structure Prediction</title>
<link>https://hdl.handle.net/1721.1/164644</link>
<description>A Data Attribution-Based Approach to Model Diagnosis in LC-MS/MS Structure Prediction
Khoo, Ling Min Serena
Elucidating the structure of small molecules from complex mixtures using liquid chromatography tandem mass spectrometry (LC-MS/MS) is a challenging task with far-reaching implications in areas such as drug discovery, environmental science, and metabolism research. Yet despite its importance, and despite significant efforts to develop machine learning (ML) models for elucidating the molecular structures of unknown compounds from LC-MS/MS spectra, the performance of these models remains limited and has been reported as insufficient for practical applications, warranting a deeper investigation into their limitations in order to advance ML-based molecular structure elucidation from LC-MS/MS and enable its utility in real-world settings. Here, we leverage data attribution methods to systematically identify and validate hypotheses about the sources of the generalization challenges that hinder current model performance. Our goal is to automatically uncover insights into the failure modes of existing ML models for LC-MS/MS, thereby laying the foundation for developing more robust and accurate models.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164644</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Dynamic Objects in Scenes with Generative Particle Systems</title>
<link>https://hdl.handle.net/1721.1/164643</link>
<description>Modeling Dynamic Objects in Scenes with Generative Particle Systems
Li, Eric
Humans readily interpret the motion of deformable and rigid bodies, even when encountering unfamiliar objects with minimal shape or texture cues. In such cases, motion serves as a critical signal for recognition and understanding. Inspired by this ability, we propose a generative model that represents 3D matter as small Gaussians (“particles”) drawn from clusters capturing groups of coherently moving matter. We develop an efficient inference algorithm based on parallelized block Gibbs sampling to recover stable particle motion and rigid groupings. Our model provides a tractable, object-centric generalization of as-rigid-as-possible (ARAP) regularizers used in motion tracking. To assess alignment with human perceptual judgments, we test our approach on random dot kinematograms—sparse motion displays in which dot trajectories convey latent object structure, often used to probe visual understanding of motion and grouping. In this setting, our approach captures human-like responses, including graded patterns of uncertainty across ambiguous conditions. Applied to naturalistic RGB videos, it infers dense particle representations that track object motion and deformation over time. These results demonstrate that our model enables persistent latent scene structure suitable for object-level reasoning.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164643</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Arm Qubit for Faster, Higher Fidelity Readout and Gates</title>
<link>https://hdl.handle.net/1721.1/164642</link>
<description>The Arm Qubit for Faster, Higher Fidelity Readout and Gates
Kline, Jeremy B.
Currently, superconducting qubit processors are bottlenecked by errors during two-qubit gates, readout, and idle time. All three error contributions could be reduced if we improved the speed of operations (without introducing additional leakage errors) compared to the qubit lifetime. Readout and two-qubit gates are multimode interactions and therefore are limited by the coupling strength between the modes. In this thesis, we introduce a two-mode superconducting qubit which uses one mode to facilitate strong coupling to other modes of the quantum processor and one mode to store data with high coherence. Simulations show that this architecture could enable order-of-magnitude reductions in error during readout and two-qubit gates.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164642</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Clustering Algorithms for Component Placement in Printed Circuit Boards</title>
<link>https://hdl.handle.net/1721.1/164641</link>
<description>Clustering Algorithms for Component Placement in Printed Circuit Boards
Petrusenko, Vlada
In 2024, approximately 12 billion printed circuit boards (PCBs) were manufactured globally [1], with the trend increasing gradually, yet the majority of PCB layouts are still completed manually. The manual design process amounts to millions of hours of tedium that can be eased with automation. One of the biggest challenges is that complex PCB designs typically have hundreds, sometimes thousands, of components and even more net connections between them, making both manual and automated placement very time-consuming. To improve placement performance, in this thesis we constructed a custom weighted undirected graph representation of a board's components and nets that encodes physical and electrical constraints. Additionally, we integrated the Louvain and Leiden clustering algorithms for component clustering in PCB placement. We also showed comparative metrics against the spectral clustering algorithm applied to unweighted graph representations, the prior state of this project, which has no knowledge of the electrical and physical constraints associated with PCB designs and thus produces results that require more manual correction. The new clustering approach generated better clusterings, reducing average runtime by 51.05%, decreasing estimated routing length by 7.72%, and improving the component association score by 12.8%.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164641</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Data Drives ML Models Performance</title>
<link>https://hdl.handle.net/1721.1/164640</link>
<description>How Data Drives ML Models Performance
Khaddaj, Alaa
Data has been playing an increasingly important role in the machine learning (ML) pipeline. This thesis deepens the understanding of the effect of data on model performance and reliability. First, we study how the choice of training data affects model performance. We consider a transfer learning setting and present a framework for selecting, from a large pool of data, a pretraining subset that improves model performance on downstream tasks. Our approach, however, requires training multiple target models, which becomes prohibitively expensive at large scale. To that end, we explore using smaller, and cheaper, proxy models to approximate large-model behavior and select the pretraining data using the cheaper model. We show the effectiveness of this approach in two dataset selection settings: language modeling and imitation learning. Second, we explore the role of data in model reliability and consider two threat models: backdoor attacks and malicious data editing. In the first threat model, an adversary injects a few doctored samples into the training set to control model predictions at inference time. We study the effect of these malicious samples on model behavior and then propose a framework for detecting and removing them from the training data. In the second threat model, an adversary leverages generative models, such as diffusion models, to maliciously modify personal data and generate harmful digital content. We focus on image editing and investigate how we can imperceptibly modify personal images to mitigate editing using diffusion models and raise the cost of harmful content generation. Overall, this thesis contributes to the understanding of the role of data in driving model behavior. Through these efforts, we aim to provide mechanisms for (i) training models that perform better and (ii) making models more reliable when deployed in the real world.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164640</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Overcoming Optimization Barriers in Non-convex and Non-smooth Machine Learning</title>
<link>https://hdl.handle.net/1721.1/164603</link>
<description>Understanding and Overcoming Optimization Barriers in Non-convex and Non-smooth Machine Learning
Gatmiry, Khashayar
At their core, our machine learning systems are trained by solving an optimization problem, where the goal is to minimize a predefined objective function by adjusting model parameters based on the data. Despite the wealth of structure and prior knowledge present in the data and feedback, our training methods remain relatively simple and independent of this structure. In spite of, or perhaps because of, this simplicity, these methods are often lacking in theoretical guarantees. To design machine learning algorithms that are less data-hungry while ensuring theoretical guarantees on both computational efficiency and output validity, it is essential to better understand and leverage the rich structure within the learning setup and the data distribution, e.g. by altering the geometry of the solution space or adjusting the objective function to induce a more effective learning procedure. This approach moves beyond classical algorithm design, which focuses primarily on handling worst-case instances. This thesis investigates the optimization landscape of central learning problems and develops geometric and analytic schemes adapted to their structure, leading to algorithms with superior computational and statistical performance. In addition, it seeks to advance our mathematical understanding of the principles underlying the success of deep learning.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164603</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Seeing the Forest Through the Trees: Knowledge Retrieval for Streamlining Particle Physics Analysis</title>
<link>https://hdl.handle.net/1721.1/164602</link>
<description>Seeing the Forest Through the Trees: Knowledge Retrieval for Streamlining Particle Physics Analysis
McGreivy, James C.
Generative Large Language Models (LLMs) are a promising approach to structuring knowledge contained within otherwise unmanageable corpora of research literature produced by large-scale and long-running scientific collaborations. Within experimental particle physics, such structured knowledge bases could expedite methodological and editorial review. Complementarily, within the broader scientific community, generative LLM systems grounded in published work could make for reliable companions allowing non-experts to analyze open-access data. Techniques such as Retrieval Augmented Generation (RAG) rely on semantically matching localized text chunks, but struggle to maintain coherent context when relevant information spans multiple segments, leading to a fragmented representation devoid of global cross-document information. In this work I utilize the hierarchical organization of experimental physics articles to build a tree representation of the corpus, and present the SciTreeRAG system, which leverages this structure with the aim of constructing contexts more focused and contextually rich than those of a standard RAG. Additionally, I develop methods for using LLMs to transform the unstructured corpus into a structured knowledge graph representation. I then implement SciGraphRAG, a retrieval system that leverages this knowledge graph to access global cross-document relationships eluding standard RAG, with the goal of encapsulating domain-specific connections and expertise. I demonstrate proof-of-concept implementations of both systems using the corpus of the LHCb experiment at CERN.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164602</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Low-Cost In Situ Gas Sensors for Oceanographic Applications</title>
<link>https://hdl.handle.net/1721.1/164601</link>
<description>Development of Low-Cost In Situ Gas Sensors for Oceanographic Applications
Gower, Elizabeth Ann
Anthropogenic activity has increased atmospheric carbon dioxide (CO₂) levels, disrupting the global carbon cycle and driving widespread environmental change. The ocean acts as a major sink. Accurate and scalable in situ monitoring of oceanic carbon chemistry is vital for understanding the impacts of climate change and informing marine carbon dioxide removal (mCDR) strategies. Many existing in situ instruments for marine applications are constrained by their size, cost, power requirements, or reliance on consumable reagents. Developing low-cost, compact, low-power, and accurate in situ sensors would significantly enhance the spatiotemporal resolution of oceanographic data and enable widespread monitoring of dissolved gases throughout the ocean. This, in turn, would deepen our understanding of how, where, and when changes are occurring within the marine carbon cycle. Two key variables essential for studying this cycle are the partial pressure of carbon dioxide (pCO₂) and dissolved inorganic carbon (DIC). This thesis presents the development of two sensors, one for in situ pCO₂ measurement and another for novel DIC quantification, both designed to be affordable, reliable, and scalable tools for advancing our understanding of ocean chemistry and the global carbon system. First, the development, calibration, and open-ocean deployment of a miniaturized Dissolved Multi-Gas Sensor (DMGS) that measures pCO₂ and the partial pressure of oxygen (pO₂) is presented. The sensor was integrated into a custom-built surface drifter designed to entangle with Sargassum mats and send data autonomously. The drifter utilized commercial off-the-shelf (COTS) components and cost roughly $1000 to build. After lab testing, a drifter was deployed in the Great Atlantic Sargassum Belt (GASB) and collected data for 22 days. In addition to gas data, the drifter tracked temperature, light intensity, humidity, pressure, and location, sending measurements via an Iridium satellite.
The resulting data captured dynamic changes in localized gas concentrations, temperature, and light levels that highlighted photosynthetic and respiratory activity within Sargassum patches. These drifters demonstrate the value of in situ data to investigate marine biogeochemical processes that contribute to the marine carbon cycle, especially in areas with high biological activity. Next, this thesis presents the iterative development of a novel DIC sensor with potential for future in situ applications. Initial prototypes tested the feasibility of using a COTS CO₂ sensor in both static and flow-through configurations; however, sensor saturation issues prompted a shift to a pressure-based detection method. Multiple test setups were evaluated for pressure stability and sensor sensitivity, culminating in a bottle-based flow system that demonstrated the potential for reagent-minimized, pressure-based DIC quantification. With the final setup, a COTS pressure sensor that sat behind a gas-permeable membrane was found to repeatably and accurately quantify DIC from acidified seawater. This approach of quantifying DIC via pressure change is novel in the field of gas sensing and maintains a low-cost, accessible design. Together, the sensors developed in this thesis expand the toolkit for marine carbon monitoring and provide a foundation for affordable, distributed sensing networks. These technologies enable higher-resolution insights into ocean biogeochemistry and support critical monitoring, reporting, and verification (MRV) frameworks needed to evaluate the effectiveness of mCDR techniques. Continued refinement of these low-cost platforms could play a key role in understanding and mitigating anthropogenic impacts on marine systems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164601</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applying Reference Class Forecasting to Multifamily Investments: Identifying and Capturing Operational Alpha through the Outside View</title>
<link>https://hdl.handle.net/1721.1/164600</link>
<description>Applying Reference Class Forecasting to Multifamily Investments: Identifying and Capturing Operational Alpha through the Outside View
Firouzian, Fardean
This thesis applies Reference Class Forecasting (RCF) to multifamily real estate underwriting as a means of countering optimism bias, strategic misrepresentation, and other distortions embedded in the traditional “inside view.” Adapted from its proven application in infrastructure and corporate capital budgeting, RCF anchors projections in the actual performance distributions of comparable assets rather than in deal-specific narratives. The research centers on the development of the “Comp Warehouse,” a structured repository of property-level financials organized by market, asset class, vintage, and unit scale. By benchmarking assumptions against statistically valid reference classes, the approach enforces empirical discipline and highlights opportunities for “operational alpha”—the marginal increase in net operating income (NOI) achieved when underperforming assets converge on median peer performance. A South Florida case study demonstrates the method’s utility in an acquisition context. Analysis of 48 assets across Melbourne, Miami, Fort Lauderdale, and West Palm Beach shows that while rent levels cluster tightly around market medians, operating expenses vary widely, producing large dispersion in realized NOI. Applying the framework to a 191-unit Class A property in Fort Lauderdale illustrates how RCF can ground underwriting assumptions by distinguishing between defensible revenue-driven growth strategies and less plausible expense-reduction projections proposed in a bidding scenario. Recognizing constraints of both scale and frequency, this thesis also explores artificial intelligence as a tool for automating the ingestion and standardization of operating statements and rent rolls. Properly deployed in a human-in-the-loop framework, AI can reduce data friction, expand sample sizes, and sharpen forecasting precision. 
The contribution of this thesis is twofold: it demonstrates the feasibility of applying RCF to the multifamily sector—an asset class whose relative standardization, liquidity, and data availability make it especially conducive to outside-view benchmarking—and it situates the methodology within a technology-native architecture designed to scale empirical discipline, enhance underwriting rigor, and systematically capture operational alpha.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164600</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concretely-Efficient Multi-Key Homomorphic Secret Sharing and Applications</title>
<link>https://hdl.handle.net/1721.1/164599</link>
<description>Concretely-Efficient Multi-Key Homomorphic Secret Sharing and Applications
He, Kaiwen
Homomorphic secret sharing (HSS) is a powerful cryptographic primitive that enables efficient, low-communication secure computation without the use of fully homomorphic encryption. Public-key HSS is a well-known variant that supports inputs from multiple parties, but all parties must agree on a joint public key before any party can encode their inputs, requiring extra rounds of communication in applications. Recently, Couteau et al. (EUROCRYPT 2025) constructed multi-key HSS (MKHSS)—a new primitive which allows parties to encode their inputs under independent keys—under the DCR assumption. MKHSS assumes only a reusable common reference string, without the need for prior interactions between parties or a public-key infrastructure. In this paper, we construct and implement the first concretely-efficient MKHSS scheme under the same assumptions used by Couteau et al. Using an algorithmic insight that reduces the largest modulus in Couteau et al. from N⁴ to N², our optimized implementation can homomorphically multiply inputs in 5.0 milliseconds—while an implementation of Couteau et al. requires 224.6 milliseconds—thereby achieving a 45× speedup. A powerful application of MKHSS is to realize attribute-based non-interactive key exchange (ANIKE), which generalizes password-based key exchange (PAKE) to arbitrary attribute policies. ANIKE is currently only known from MKHSS. We use our implementation to evaluate the first concretely-efficient ANIKE schemes for a range of practically useful policies. Using our implementation, two parties can perform a geolocation-based key exchange in 1.65 seconds and a fuzzy PAKE on an 8-word passphrase in 7.59 seconds for realistic parameters, on a single core. Compared to using Couteau et al., which requires 62.5 and 253 seconds, we achieve 38× and 33× speedups, respectively.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164599</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reciprocity and Normality in the Scattering Matrix of Disordered Media</title>
<link>https://hdl.handle.net/1721.1/164598</link>
<description>Reciprocity and Normality in the Scattering Matrix of Disordered Media
Bharadwaj, Shreyas K.
The scattering matrix formalism provides a practical characterization of wave transport in linear, source-free systems by relating a set of operationally defined input and output spatial channels. The matrix is structured as a block operator, with diagonal blocks encoding same-side reflection matrices (RMs) and off-diagonal blocks encoding transmission matrices (TMs) in opposing propagation directions. Under Helmholtz reciprocity, symmetry relations are imposed: RMs are symmetric, and forward and reverse TMs are mathematical transposes of each other. These relations were employed as constraints to correct system-induced aberrations in measured scattering matrices of complex optical media via a matrix-based gradient descent procedure. Resulting phase corrections corresponded closely with classical aberration modes without heuristic parameterizations, suggesting that these modes naturally arise to restore reciprocity-induced symmetry. Vectorial TMs were measured for single- and double-pass propagation through step-index multimode fibers (MMFs) and scattering samples, with corrected phase terms showing agreement across sample types. Furthermore, matrix normality was introduced as a descriptor of stable modal transport. Normal matrices admit unitary diagonalization, reflecting orthogonal eigenchannels and spectrally coherent propagation. Near-normal behavior was observed in fiber TMs, while RMs of scattering slabs remained strongly non-normal, as quantified by a normalized Henrici departure. Sufficient conditions for normality were identified in terms of the system Green’s function and its bi-compression onto the measurement basis. A complementary dispersion experiment investigated two regimes: nearly-normal MMFs, where the Wigner–Smith time-delay operator was jointly diagonalizable and supported accurate first-order spectral models; and mechanically compressed fibers, where loss of normality produced noncommuting operators and collapse of model fidelity.
These results suggest that normality captures well-behaved modal transport, underpinning the validity of parametric models and other operator-based analyses of disordered media. Together, reciprocity and normality impose complementary constraints on wave transport: reciprocity governs global symmetry, while normality captures internal coherence of modal propagation. Relevance is noted for matrix-based imaging, inverse scattering theory, and non-Hermitian wave physics, where symmetry and modal stability remain central.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164598</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mesh Differentiable Rendering for Real-World Scenes</title>
<link>https://hdl.handle.net/1721.1/164597</link>
<description>Mesh Differentiable Rendering for Real-World Scenes
Charatan, David
Differentiable rendering has established itself as an effective tool for 3D reconstruction and novel view synthesis. Most state-of-the-art differentiable rendering methods use purpose-built renderers to optimize specialized, nonstandard 3D representations. However, most downstream applications of differentiable rendering rely on 3D meshes, which are near-universally supported due to their suitability for a wide range of rendering, simulation, and 3D modeling workflows. While prior methods have explored using 3D meshes directly within gradient-based optimization, they have been limited to object-centric scenes and cannot reconstruct real-world, unbounded scenes. This work addresses this shortcoming via a differentiable rendering formulation that combines an off-the-shelf, non-differentiable triangle rasterizer with a 3D representation that consists of nested mesh shells. During every forward pass, these shells are extracted from an underlying signed distance field. Then, the shells are independently rasterized and the resulting images are alpha-composited using opacities derived from the shells' per-vertex signed distance values. Notably, the shells' vertex positions are updated only via the underlying signed distance field, not via backpropagation through the rasterizer itself. This makes our method compatible with off-the-shelf, non-differentiable triangle rasterizers. To the best of our knowledge, our method is the first differentiable mesh rendering method that scales to unbounded, real-world 3D scenes, where it produces novel view synthesis results whose quality approaches that of state-of-the-art, non-mesh-based methods. Our method's performance is also competitive with state-of-the-art surface rendering methods on object-centric scenes. Ultimately, our method suggests that it may be possible to solve the differentiable rendering problem using tools from the conventional graphics toolbox rather than relying on specialized renderers.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164597</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Task-Aware Spatial and Temporal Aggregation for Capacity Expansion Planning</title>
<link>https://hdl.handle.net/1721.1/164596</link>
<description>Task-Aware Spatial and Temporal Aggregation for Capacity Expansion Planning
Duguey, Gabriel
As we plan tomorrow’s electricity system, we face fundamental questions: where should new power plants go, which technologies deserve investment, and how much transmission is enough? These decisions are the domain of Capacity Expansion Planning (CEP), a class of optimization models that guide long-term infrastructure investments in power systems. To be realistic, CEP models must capture fine-grained spatial and temporal variations because demand varies by city and climate, while wind and solar output depend on weather patterns that shift hour by hour and location by location. But representing the system with thousands of time steps and hundreds of nodes makes the optimization problem computationally too large to solve. &#13;
&#13;
This thesis addresses the core question: how can spatial and temporal aggregation in CEP models be designed to preserve planning-relevant patterns that drive investment decisions? Existing approaches often treat aggregation as a neutral preprocessing step, relying on heuristics like political boundaries or geographic proximity. In contrast, we propose a task-aware pipeline that treats aggregation as an integral modeling decision, explicitly aligned with planning objectives.&#13;
&#13;
The approach builds a composite similarity metric that blends diverse planning-relevant signals, including, but not limited to, duration curves, ramping behavior, and spatial correlation, and uses k-medoids clustering to define spatial zones. Temporal aggregation is then applied to daily system-wide profiles, selecting representative days that maintain cross-zonal interactions. The result is a reduced spatio-temporal dataset fed into a CEP model. The resulting investment decisions are re-evaluated at full resolution to assess their feasibility and real cost.&#13;
&#13;
Experiments on a New England case study show the pipeline consistently outperforms common baselines like political boundaries, geographic proximity, or capacity factor statistics. Among 50 feature weightings, the best design reduces system cost by 13% compared to heuristics. Correlation-based features drive the best results, while raw amplitude and geographic location often degrade performance when used alone.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164596</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Earth as Equity Partner: A Revolutionary Approach to Ecological Conservation through Housing Development</title>
<link>https://hdl.handle.net/1721.1/164595</link>
<description>Earth as Equity Partner: A Revolutionary Approach to Ecological Conservation through Housing Development
McDonough, Kate
Duddington Farm is a 312-acre site north of Baltimore, Maryland. A stream restoration project was completed at the location nearly a decade ago in concert with the State of Maryland, the Manor Conservancy, Ecotone, and landowners Harry and Tara McDonough. The project was conducted with some success; however, due to a lack of State oversight and long-term management provisions, the ecology has since declined. The following proposal outlines a new model for long-term land restoration and conservation, whereby land conservation and restoration are financed not solely through short-term grants and fragile easements, but through the thoughtful use of modest real estate interventions. A small cluster of homes is developed on one portion of the site. The act increases the value of the land, generates equity, and establishes a permanent conservation fund. The design protects habitat and invites people into a deeper relationship with the natural world. The plan offers scalability in taking the land value capture and applying it to future land conservation projects, compounding returns and projecting a model to preserve hundreds of thousands of acres of critical land across the United States. This model highlights traditional ecological knowledge (TEK) and Indigenous practices of engaging with the land, highlighting a deeper understanding of how humans and nature can coexist in mutually healthy ways. The model is designed at a time when watersheds, national parks, and old-growth forests face their greatest ecological threats. Duddington Farm is used as a retrospective case, but the broader goal is to create a regenerative framework for conservation-based development across critical watershed regions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164595</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Causal Effects of Mandatory Quarterly Earnings Guidance on Corporate Information Environment and Corporate Short-Termism</title>
<link>https://hdl.handle.net/1721.1/164594</link>
<description>The Causal Effects of Mandatory Quarterly Earnings Guidance on Corporate Information Environment and Corporate Short-Termism
Wang, Yuting
I examine the causal effects of mandatory quarterly earnings guidance using a regulatory mandate in China that required a subset of listed firms to issue bundled quarterly earnings guidance from 2007 to 2018. A difference-in-differences analysis shows that when these firms are no longer required to issue such guidance, their corporate information environment deteriorates, evidenced by reduced analyst coverage, fewer site visits, and lower price timeliness, meaning that stock prices incorporate less information about current and future earnings. However, these firms increase R&amp;D and SG&amp;A spending, consistent with alleviated managerial myopia as short-term market pressure eases. These findings highlight the dual-edged nature of the mandatory quarterly earnings guidance and offer insights for both practitioners and policymakers.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164594</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Finite Elements</title>
<link>https://hdl.handle.net/1721.1/164593</link>
<description>Automated Finite Elements
Collin, Teodoro Fields
Finite element methods (FEMs) are a powerful and ubiquitous tool for solving engineering problems. Experimenting with different finite elements can improve the quality and efficiency of solutions. Furthermore, in some cases, the wrong (but nonetheless most common) choice of finite element will produce solutions which converge to the wrong answer regardless of mesh resolution. However, in practice, the choice of finite element is not explored due to the complexity of re-deriving and re-implementing finite element methods. Trying a new finite element is challenging because practitioners must manually deduce formulas to use these elements and they must implement these formulas within the context of a potentially complex system. We address this problem by introducing ElementForge, a finite element system that is parametric over the literate mathematical specification of a finite element in a domain-specific language (DSL). The ElementForge compiler reasons about tensor spaces, tensors, and tensor bases from first principles to derive implementations of finite elements. The ElementForge compiler is able to automatically derive implementations of finite elements previously only derived by hand. Further, ElementForge minimally couples several key mathematical concepts, mainly tensor fields, mesh topologies, sparse tensors, and assembled finite element operators, to produce a complete finite element system that is parametric over the choice of element. Consequently, the elements derived by the compiler can be applied parametrically to new meshes, PDEs, and boundary conditions. We evaluate our system by implementing several simulations with different finite elements, demonstrating that our system can explore tradeoffs in generality, accuracy, speed, and representational complexity. For example, we are able to implement the Morley, Bell, Argyris, and Hermite-like elements with fewer than 50 lines of code and use them all in a single simulation.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164593</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bespoke Threat Models: Achieving Realistic Privacy Guarantees for Deployed Protocols</title>
<link>https://hdl.handle.net/1721.1/164592</link>
<description>Bespoke Threat Models: Achieving Realistic Privacy Guarantees for Deployed Protocols
Hogan, Kyle
This thesis focuses on the question of what degree of privacy is achievable in the real world for long-running applications. We explore this question in two main settings: private advertising and anonymous communication. In doing so, we consider constraints each application may have in practice and what adversarial model is realistic for the context in which the application will be deployed. For real-world applications, achieving perfect privacy — especially against a worst-case adversary — can be impossible. That is, perfect privacy, while achievable in theory, may in practice require assumptions that conflict with usability, deployability, or utility requirements. This presents a challenge as privacy-preserving technologies can, necessarily, only provide privacy for the people who use them. Because of this, designing around user experience is critical, even if doing so requires compromises in the theoretical degree of privacy a system can provide or the strength of adversaries considered in its threat model. In the space of private advertising, we first propose a novel protocol, AdVeil, that eliminates leakage of user data beyond that revealed by the input/output of the ads ecosystem as a whole. We then provide a minimal modeling of the functionality of digital advertising which we use to prove that, even for systems like AdVeil with minimal leakage, the advertising metrics released at the end of the protocol are sufficient to leak information about end users to advertisers when combined with their audience targeting criteria. In the space of anonymous communication, we propose ShorTor, a new routing protocol for Tor that utilizes techniques popular with content distribution networks (CDNs) to reduce latency while maintaining Tor’s existing anonymity guarantees. We evaluate this protocol using a dataset of over 400,000 latency measurements we collected between the 1,000 most popular Tor relays.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164592</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Operationalizing Reliable Machine Learning: From Data Collection to Model Presentation</title>
<link>https://hdl.handle.net/1721.1/164591</link>
<description>Operationalizing Reliable Machine Learning: From Data Collection to Model Presentation
Balagopalan, Aparna
Automated systems driven by machine learning (ML) have made exciting progress across a spectrum of applications. Despite such progress, encoded biases and other failure modes may create barriers to the real-world utility and reliability of such systems. For example, nonrandom data missingness, biased algorithmic optimization objectives, or model presentation strategies that incorrectly impact user trust can all cause models to fail in practice. In this thesis, guided by such observations and prior work on pipeline-awareness in machine learning, we aim to operationalize reliable ML. Under this goal, we propose a framework consisting of the following three components: responsible data collection, robust algorithm development, and fair model presentation. We first conduct two case studies to advance responsible data collection. We investigate whether standard procedures for acquiring data can be repurposed when training models to mimic human judgments about norm violations. We also demonstrate patterns of delayed demographic data reporting within a longitudinal healthcare dataset and show that time-varying missingness due to such delays can distort disparity assessments. Second, we introduce two novel algorithms to improve reliability: a method that leverages representations from vision-language models to filter noisy training data, and a method to produce fair rankings that account for properties of search queries. Finally, since the presentation design of predictions impacts the trust of model consumers, we propose metrics to quantify the fairness of post-hoc explainability techniques. Thus, with this thesis, we re-evaluate measurements throughout the machine learning pipeline and contribute to the broader goal of reliable machine learning.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164591</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Anti-phage defense as a driver of molecular innovation</title>
<link>https://hdl.handle.net/1721.1/164590</link>
<description>Anti-phage defense as a driver of molecular innovation
Doering, Christopher Ross
Bacteriophages, or phages for short, pose a near-constant threat to the bacteria they infect. Billions of years of conflict have been a catalyzing force for the creation of bacterial defense systems and corresponding phage evasion strategies. To counter phage predation, bacteria have developed a vast diversity of enzyme chemistries and molecular sensing mechanisms whose study has produced new biotechnological tools and insights into our own immune systems. In this work, I have investigated anti-phage defense mechanisms at multiple scales using a combination of genetic, biochemical, and bioinformatic approaches. First, I characterized the mechanism of action of the anti-phage defense system CmdTAC, a toxin-antitoxin-chaperone system that recognizes a viral structural protein to activate a novel mRNA ADP-ribosyltransferase, thereby halting infection. Next, I examined the diversity and distribution of anti-phage mechanisms encoded by E. coli lysogenic phages – phages capable of integrating into and lying dormant within their bacterial hosts. This analysis uncovered overlooked classes of lysogenic phages harboring novel candidate defense systems, including one newly validated system with no detectable homology to previously known mechanisms. Together, this work broadens our understanding of bacterial immune systems, expands the pool of known enzyme chemistries, and highlights areas where continued study can reveal additional mechanisms of anti-phage defense.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164590</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shaping Function Through Space: The Role of Spatial Organization in Microbial Communities</title>
<link>https://hdl.handle.net/1721.1/164589</link>
<description>Shaping Function Through Space: The Role of Spatial Organization in Microbial Communities
Toneatti Vercelli, Gabriel
Spatial organization plays a critical role in microbial community function, influencing how cells exchange metabolites, coordinate behavior, and compete for resources. This thesis investigates the consequences of spatial structure in natural microbial systems and introduces a novel method to engineer these systems with high precision and scalability. First, we examine the colonization of chitin particles by marine bacteria, a model for particulate organic matter degradation. Using high-throughput phenotyping of natural isolates, we show that vitamin cross-feeding is essential for successful colonization of chitin particles by many auxotrophic strains. We then model two distinct vitamin cross-feeding mechanisms: lysis and secretion. Using a resource-explicit modeling approach, we leverage metabolic-flux and physiological measurements to predict the colonization success of auxotrophic cross-feeders in this spatially structured environment. Second, we introduce a new chemical method for engineering microbial cell surfaces that enables covalent attachment of molecules such as enzymes and DNA strands to the cell surface. We show that this surface functionalization procedure leads to the acquisition of new phenotypes like antibiotic resistance and programmable adhesion. Altogether, this work reinforces the importance of spatial organization for microbial community function and introduces a new technique to harness this community feature and turn it into a design principle for synthetic microbial systems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164589</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>LumiModeling: A Gaussian Splatting-Based Tool for Recreating Dynamic Material and Lighting Interaction in Architecture</title>
<link>https://hdl.handle.net/1721.1/164588</link>
<description>LumiModeling: A Gaussian Splatting-Based Tool for Recreating Dynamic Material and Lighting Interaction in Architecture
Cao, Biru
This thesis presents LumiModeling, a real-time visualization tool based on Gaussian Splatting (GS) that simulates the dynamic interplay between materiality and lighting in architectural environments. While conventional design workflows rely on geometric modeling and photorealistic rendering, they often abstract complex material behaviors and fall short in capturing light-material interactions. In contrast, GS enables the reconstruction of high-fidelity 3D models from 2D image sets, representing view-dependent effects such as reflection, transparency, and surface roughness. A comparative analysis using real-world data from the MIT Stata Center and the Met Warehouse demonstrates GS’s advantages over mesh-based photogrammetry, particularly in rendering reflective and transparent materials. This work extends existing GS capabilities by implementing a relightable pipeline based on the existing model Relightable3DGaussian (Gao et al., 2023), in which each Gaussian point is augmented with physical parameters, including BRDF, surface normals, and incident lighting. The Stata Center dataset is used to test the relighting of GS. A user study involving architecture professionals reveals that perceptual focus shifts from geometry to materiality and lighting as visual realism increases. The findings highlight the potential of relightable GS in architectural visualization and anticipate its integration into future design workflows.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164588</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Urban Data Memory: Using Generative AI to Structure and Visualize Zoning Data for Urban Planning Evaluation</title>
<link>https://hdl.handle.net/1721.1/164587</link>
<description>Urban Data Memory: Using Generative AI to Structure and Visualize Zoning Data for Urban Planning Evaluation
Kupershmidt, Adi
Urban planners face significant challenges in systematically and quantitatively evaluating past planning practices, stemming, among other reasons, from the scarcity of accessible structured data. The period from a plan’s initiation to implementation can span generations; recorded data from the planning processes are often deemed obsolete for addressing present concerns by the time of post-occupancy evaluation. This research examines whether generative AI can help bridge this gap and under what conditions, highlighting both challenges and opportunities, by introducing a system that responsively transforms qualitative zoning data into structured, queryable formats to support the quantitative analysis of planning practices. &#13;
A database of ~150 approved semi-structured urban plans under Tel Aviv municipality’s local jurisdiction supports this project's case study. The system relies on proprietary LLMs (ChatGPT, Claude), streamlining natural language queries through three agentic tasks: (1) RAG (Retrieval Augmented Generation) based querying, generating free-text answers from all plans, (2) structuring the answers into valid JSON, and (3) visualizing the structured data. Key findings indicate 85.45% precision for the system, as evaluated through an end-to-end assessment of 11 representative queries, each validated against 40 manually labeled plans. The tool provides actionable insights, enabling queries such as trends in sheltered bicycle parking approvals or the status of affordable housing planning over the past decade.&#13;
This research underlines the significance of flexibly structuring non- and semi-structured data for urban science. It addresses the growing gap between static legacy data collection and real-time policymaking, democratizing access to planning information and fostering informed decision-making practices. Integrating cutting-edge AI-driven tools contributes to the current discourse on AI applications for city management and planning by providing a replicable model for more cities and planning datasets to build upon and improve.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164587</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing Methods for Enhanced Measurement of DNA Single-Strand Breaks and Somatic Variants</title>
<link>https://hdl.handle.net/1721.1/164586</link>
<description>Developing Methods for Enhanced Measurement of DNA Single-Strand Breaks and Somatic Variants
Elacqua, Juniper J.
Maintenance and repair of DNA are essential for proper cellular functioning and preventing the emergence of disease states. As cells divide, mutations accumulate in the genome which contributes to aging phenotypes and can result in genetic diseases such as cancer. The rate at which a cell develops mutations can be accelerated through exposure to genotoxic agents that introduce lesions which, if left unrepaired, prevent accurate replication of the genome. As such, it is crucial to understand the ways in which DNA becomes damaged, how cells respond to various types of damage, and how this damage contributes to mutagenesis and the development of genetic disease. These fields of study have been greatly advanced by improvements in DNA sequencing technologies, and here we present two sequencing-based methods that aim to enable deeper study of DNA damage, repair, and mutagenesis. First, we demonstrate DENT-seq, a method that identifies single-strand breaks with single-nucleotide resolution. Single-strand breaks are the most common form of DNA damage, occurring at rates of ~10,000 per cell per day, but have to date been understudied due to lack of an unbiased, high-resolution method for their detection. Second, we improve upon lineage sequencing, a previously reported method that uniquely measures somatic single nucleotide variants in dividing cells to achieve high specificity/sensitivity as well as the ability to temporally resolve variants and to relate sequenced genotypes to optically observed cellular phenotypes. Despite the high-quality data and unique capabilities offered by this method, it has so far been underused due to a need for complex, microfluidic-based cell collection. We demonstrate novel protocols for performing lineage sequencing that enable easy adoption of the method without the need for highly specialized equipment or expertise. 
In addition, we expand the repertoire of mutations measurable with the technique to include indels and variants that arise specifically in response to a genotoxic treatment. The methods we show can be applied to reveal novel findings regarding the causes and consequences of DNA damage and mutagenesis that underlie numerous genetic diseases.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164586</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reconfigurable and Interference-Tolerant Receivers for Next Generation Wireless Systems</title>
<link>https://hdl.handle.net/1721.1/164585</link>
<description>Reconfigurable and Interference-Tolerant Receivers for Next Generation Wireless Systems
Araei, Soroush
An “all-in-one” radio, programmable across the sub-7 GHz spectrum, offers significant hardware efficiency for 5G systems. However, addressing strong interferers in this wide and congested spectrum remains a major design challenge. N-path filters offer a promising solution for efficiently suppressing interference, thanks to their clock-controlled reconfigurability and excellent linearity against in-band and adjacent-channel blockers. While widely adopted in modern receiver architectures, these switched-capacitor circuits remain inherently vulnerable to blockers at clock harmonics, due to their hard-switching nature. These blockers, common in 5G bands, pose a key bottleneck, delaying the realization of fully integrated multi-band, multi-mode radios. This dissertation introduces fully passive topologies to address this challenge. The first design leverages simultaneous charge sharing and capacitor stacking to implement harmonic rejection filtering. It operates entirely without active circuitry and exhibits exceptionally low loss. A second-generation technique, termed “harmonic reset switching”, builds on this approach by rejecting harmonic blockers directly at the driving point of the N-path filter, achieving superior performance with reduced circuit complexity. As a result, existing reconfigurable receiver topologies can be seamlessly transformed into harmonic blocker–resilient architectures. For example, a taped-out mixer-first receiver adopting this technique achieves a 100× improvement in third-harmonic blocker tolerance compared to state-of-the-art broadband receivers. This dissertation also proposes a reconfigurable receiver for IoT-class radios that is tolerant to both close-in and far-out blockers. A scalable clock bootstrapping technique is introduced to enhance linearity while maintaining both power and cost efficiency. All designs are validated through prototypes fabricated in advanced 22-nm and 45-nm silicon-on-insulator (SOI) technologies. 
By addressing this long-standing challenge, this work paves the way for fully reconfigurable, interference-resilient radios for 5G and beyond.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164585</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Video as the Language of Embodied Intelligence</title>
<link>https://hdl.handle.net/1721.1/164584</link>
<description>Video as the Language of Embodied Intelligence
Chen, Boyuan
Achieving general-purpose embodied intelligence remains a central challenge in artificial intelligence. While recent efforts have extended Large Language Models (LLMs) to robotics by incorporating additional modalities, these adaptations face critical limitations in perception, grounding, and control. For example, spatial reasoning—a simple yet indispensable capability for robots—reveals one such shortcoming clearly: multimodal LLMs often fail even basic spatial perception tasks like estimating distances. This thesis begins by examining these failures through SpatialVLM, a system that augments vision-language models with 3D spatial reasoning. Although more effective in spatial estimation, this work reveals a deeper issue: the fundamental expressive limitations of language-only outputs in capturing sensorimotor dynamics. Based on these findings, the thesis advocates for a ground-up methodology for robot foundation models, starting with identifying an appropriate “language” for embodied AI, then architecting models and training regimes accordingly. We investigate video as the foundational language, integrated with model-based planning for decision-making. This new paradigm is instantiated through two core contributions. The first is Diffusion Forcing, a hybrid modeling framework that combines causal next-token prediction with full-sequence diffusion. This approach supports stable, coherent rollouts far beyond the training horizon and allows guided generation for decision-making tasks, bridging predictive modeling and planning. Building on Diffusion Forcing, we introduce the Diffusion Forcing Transformer (DFoT), a natural architectural extension designed for flexible video generation conditioned on variable-length histories. To further support long-horizon world-modeling, we propose History Guidance, a set of techniques that enhance sample fidelity, temporal consistency, and compositional generalization. 
Together, these methods enable robust modeling of visual dynamics across extended timeframes. Finally, we present a preliminary yet promising video foundation model for zero-shot robot motion planning, highlighting the potential of video as the foundational language of embodied intelligence.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164584</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biologically Interpretable Representation Learning for Mechanistic Insights into Cancer Immunotherapy Resistance</title>
<link>https://hdl.handle.net/1721.1/164583</link>
<description>Biologically Interpretable Representation Learning for Mechanistic Insights into Cancer Immunotherapy Resistance
Tariq, Ifrah
Resistance to immune checkpoint inhibitors (ICIs) remains a critical barrier to effective cancer therapy, driven by complex, multi-scale interactions that current biomarkers often fail to capture. This dissertation introduces the Biologically Disentangled Variational Autoencoder (BDVAE)—an interpretable deep learning framework designed to uncover mechanistic drivers of ICI resistance through multi-omic data integration. Using RNA-seq and whole-exome sequencing data from 366 patients across melanoma, renal cell, urothelial, and gastric cancers, BDVAE learns low-dimensional latent representations that are both predictive of response and biologically meaningful. The model reveals distinct latent dimensions aligned with immune regulation, tumor-intrinsic signaling, metabolism, and neuroimmune interactions. SHAP-based interpretation and pathway analysis highlight key resistance-associated programs, including immunosuppressive cytokine signaling, metabolic signaling, and neuroactive pathways such as calcium and cAMP signaling. Unsupervised clustering identifies three tumor subtypes—responder-dominant, non-responder-dominant, and an intermediate group—suggesting plastic or transitional immune states. Survival analyses confirm the clinical relevance of these clusters and expose heterogeneity within standard RECIST categories. Overall, this work presents a novel, interpretable framework for modeling ICI response, offering insights into resistance mechanisms and actionable paths for biomarker discovery, patient stratification, and therapeutic innovation in precision immuno-oncology.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164583</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric interpretations of structural demand for the analysis and reduction of design complexity</title>
<link>https://hdl.handle.net/1721.1/164582</link>
<description>Geometric interpretations of structural demand for the analysis and reduction of design complexity
Lee, Keith Janghyun
This dissertation presents a computational framework to effectively interpret the distribution of structural demand that emerges from the design of large-scale structural systems, and develops methods for its quantification and manipulation. Structural demand is the required strength and geometry of individual building components that emerges from design as a result of global geometry, topology, and loading. Existing metrics of structural performance fail to consider how variations in demand at the component level can lead to designs that are theoretically efficient but difficult to construct. This has led to a rejection of low-carbon, high-performance design solutions in practice, or the need for extensive post-hoc rationalization, both under the presumption of untenable design complexity for conventional building practices. This dissertation argues that an explicit consideration of the distribution of induced structural demand can bridge this gap between design intent and construction feasibility.&#13;
&#13;
To achieve this, structural demand is interpreted as sets of geometric objects in n-dimensional feature spaces, where each dimension represents an independent component of demand, such as area, length, or stiffness. By directly visualizing the spatial distribution of demand, designers are presented with a richer context of non-physical structural design information, and can evaluate how decisions in structural form affect this distribution. Further, spatial interpretations of information allow for spatial metrics of similarity and variation to be defined, from which quantitative measures of design complexity are derived that account for the shape and distribution of demand. This framework, named “Demand Space Analysis”, is explored in depth and applied to a range of structural scales, from the demand of truss elements and their connections, to the relationship between demand and fixed sets of capacity. Advancements in structural optimization are also presented, enabling more efficient and direct minimization of modern structural performance metrics, from which the relationship between design performance and demand complexity can be explored. Through case studies in each chapter, this dissertation demonstrates how geometric analysis of structural demand information can inform the designer of the implications of decisions on the perceived complexity of design, and provides tools for its quantification and reduction.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164582</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Behavioral Responses to Congestion Pricing in New York City: Mode Shift, Preference Change, and Effect Persistence</title>
<link>https://hdl.handle.net/1721.1/164581</link>
<description>Behavioral Responses to Congestion Pricing in New York City: Mode Shift, Preference Change, and Effect Persistence
Shen, ChenAn
This thesis examines the behavioral impacts of New York City’s congestion pricing policy on weekday peak-hour travel into the pricing zone. Using a two-stage Bayesian Multinomial Logit framework applied to monthly aggregate mobility data, the study disentangles underlying preference shifts from observed mode share changes in response to the toll. Stage 1 estimates population-level travel sensitivities to cost and time, while Stage 2 uses a hierarchical structure to capture heterogeneity across demographic segments defined by income, age, and gender. The analysis spans January–June 2025 and compares results to the same months in 2024 as a counterfactual scenario without pricing. Findings show that while the policy generated a sustained mode shift away from private automobiles toward public transit, preference adaptation varied by demographic group and evolved over time. Some cohorts reinforced the intended policy effects through reduced transit travel time sensitivity, while others exhibited partial reversal as cost sensitivity shifted. These dynamic patterns underscore the importance of evaluating both immediate and evolving behavioral responses when designing congestion pricing strategies and highlight the value of aggregate behavioral modeling for timely, data-driven policy assessment.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164581</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Inhabited Arctic: Architecture, Time, and the Making of the Past in the Bering Strait (1760–1980)</title>
<link>https://hdl.handle.net/1721.1/164580</link>
<description>The Inhabited Arctic: Architecture, Time, and the Making of the Past in the Bering Strait (1760–1980)
Springstubb, Phoebe
Our view of antiquity is not objective. From the eighteenth century on, the same actors and institutions involved in colonizing the Arctic shaped understandings of its deep past. Commercial whalers erected outposts on the Arctic Ocean’s edges; miners stripped tundra; trading companies raised forts. The demands of these projects complicated the Western imperial fiction of an Arctic without a past. Grappling with Arctic terrain, foreigners were confronted by a landscape inhabited not only by people and animals but by time and temporal imaginations that long preceded European colonization. They encountered contemporary Indigenous settlements coexisting with ancestral houses, fossil animals, the ruins of earlier colonial ventures, and ancient routes of exchange. This dissertation, centered on the Bering Sea and its adjacent geographies of eastern Siberia and Arctic North America, tells the story of how imperial upheaval and the rooting of colonial projects in the ground sparked a deliberate historiographic project to write the Arctic’s deep past. At the heart of this project was a conflict of different cultural views of time. Who had the right to narrate history in these northernmost borderlands? In episodes spanning two centuries, from the Russian empire’s claim to the Bering Sea to the rise of modern decolonial movements, this dissertation traces the central role of diverse Native architectures and technologies. Iñupiaq houses built from great whale skeletons, Unangax watercraft hewn from circulating driftwood, and Chukchi ice cellars carved into permafrost were both prisms for temporal explanations and sites driving change. Russian colonial administrators, British geologists, US ethnographers, Orthodox priests, and Soviet engineers co-opted them to the lineal, geological, eschatological, and paleolithic time that scaffolded imperial projects. 
Simultaneously, these material practices were vital sites for reinvention and identity, where Native nations built futures out of rupture. Illuminating how the ecological and epistemic limits to empire-building spurred new theories of Arctic time, this project shows history-making to be a crucial tool different states adopted to justify and naturalize their possessions of Native lands. At stake was not static historical truth but how politically situated temporalities structured their present-day actions. The ethical dimensions of deep time, imagined from the Bering Strait’s modern lands and seas, empowered empire’s practical work. How the past was conceived in different intellectual traditions informed whether animals and plants were exploitable resources or ancestors giving their bodies to architecture. This project contends that how people understood themselves as being in time was a decisive fulcrum ordering collective beliefs in what was owed to a larger, nonhuman world. Taking time as an analytical lens, this dissertation identifies repeated efforts to cleave the Arctic’s human history from nature’s past. Used to justify a wide range of colonial hierarchies and violence in the long nineteenth century, it underlies a contemporary bias toward seeing the Arctic as a region of deep naturalism. Viewed as a place where an “extreme” climate dominates manifold other historicities, the past so circumscribed continues to shape future possibilities.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164580</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Asset Limits, Savings Behavior, and Welfare: Evidence from the SIPP and a Life-Cycle Model</title>
<link>https://hdl.handle.net/1721.1/164579</link>
<description>Asset Limits, Savings Behavior, and Welfare: Evidence from the SIPP and a Life-Cycle Model
Gamble IV, James Monroe
This paper examines how asset limits in means-tested welfare programs shape household saving behavior. I exploit cross-state variation in Temporary Assistance for Needy Families (TANF) asset limits by linking these limits to individual-level data from the Survey of Income and Program Participation (SIPP) and estimating ordinary least squares (OLS) regressions with state and year fixed effects. I find that a $1 increase in the liquid asset limit corresponds to a $0.75 decrease in non-housing wealth among single mothers without a high school diploma. This suggests that less stringent asset tests reduce incentives to save, consistent with models in which more generous public insurance lowers the need for precautionary saving.&#13;
&#13;
To interpret these findings, I develop a dynamic life-cycle model of saving under income and medical expense risk, calibrated to key moments from the Hubbard, Skinner, and Zeldes framework. The model embeds Medicaid-style transfer rules and a guaranteed consumption floor. Simulations indicate that a $7,000 consumption floor can reduce median assets by up to 20% among low-education households, reflecting a decrease in self-insurance as public support increases. I then extend the model to include Achieving a Better Life Experience (ABLE) accounts, which are tax-advantaged savings vehicles for individuals with disabilities exempt from means testing. Simulations indicate that ABLE eligibility increases early-life consumption by approximately $10,000 and reduces retirement savings, with account holders shifting more spending into their working years. Together, these results yield a direct mapping from policy levers, including asset-limit generosity, earnings disregards, childcare subsidies, and ABLE exemption rules, to predicted shifts in median household assets. This offers policymakers a practical tool to balance public insurance and private precautionary savings.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164579</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building the 3D Genome from the Ground Up: Local Interactions Give Rise to Global Order</title>
<link>https://hdl.handle.net/1721.1/164578</link>
<description>Building the 3D Genome from the Ground Up: Local Interactions Give Rise to Global Order
Athreya, Advait
The three-dimensional organization of the genome within the nucleus plays a central role in determining gene regulation and establishing cellular identity, but the mechanisms by which local molecular interactions give rise to global chromatin architecture remain an active area of study. Interactions between nucleosomes—modulated by histone tail post-translational modifications, histone sequence variants, and the DNA sequence itself—are thought to be a major driver of this emergent structure. In this thesis, I address the question of how these intrinsic physicochemical properties of nucleosomes drive the formation of large-scale structures such as chromatin compartments. I develop a theoretical framework based on Flory-Huggins solution theory to derive pairwise internucleosome contact energies from the results of condense-seq, a novel experimental technique that measures the phase separation likelihood of native nucleosomes. I then use these derived energies to parameterize coarse-grained molecular dynamics simulations of chromatin at various resolutions, ranging from 25kb segments to simulate an entire chromosome, down to individual nucleosomes to simulate up to 10Mb genomic regions. These simulations demonstrate that the intrinsic nucleosome properties alone can capture a significant degree of A/B compartment formation observed in Hi-C experiments, despite the deliberate exclusion of all other factors such as loop extrusion and transcription-factor-mediated phenomena. This finding establishes that local nucleosome properties play a fundamental role in genome organization. To capture more detailed chromatin physics, I develop an extended chromatin force-field that incorporates anisotropic nucleosome stacking interactions and linker DNA properties using a novel approach for simulating reversible bond formation in molecular dynamics. 
This model reveals how nucleosome stacking strength, linker DNA geometry, and torsional stress collectively influence higher-order structures. Early results show that the linker-length-dependent DNA torsion contributes to nematic ordering of chromatin, consistent with experimental studies. Future development of this model will enable probing of discrete domain formation observed in imaging studies. Finally, I address a critical consideration for researchers in the chromatin organization field when analyzing Hi-C results. I compare two software tools — cooltools and dcHiC — highlighting the importance of careful parameter selection and analytical choices in designing workflows to ensure reproducible research. Taken together, this work establishes a quantitative, bottom-up modeling framework that directly links the local physicochemical properties of nucleosomes to the global principles governing three-dimensional genome organization. It provides a complementary approach to more data-driven top-down models that have made significant inroads but are challenging to interpret mechanistically. With further development, the work presented in this thesis will contribute towards predicting the structural consequences of specific epigenetic modifications and move us closer to understanding the molecular grammar of chromatin and its role in cellular function and disease.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164578</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Subannual Variability in the Abyssal Ocean</title>
<link>https://hdl.handle.net/1721.1/164577</link>
<description>On Subannual Variability in the Abyssal Ocean
Chen, Si Yuan
The abyssal ocean is a critical yet understudied component of the climate system and is of growing economic interest. This thesis combines field observations and numerical modeling to advance our understanding of subannual variability in the abyssal ocean and its broader implications.&#13;
&#13;
First, hydrographic measurements from the Clarion-Clipperton Zone of the tropical Northeastern Pacific are used to characterize the structure and variability of the bottom mixed layer (BML) in a region targeted for deep-sea mining. The observations reveal a spatially and temporally variable BML with a mean thickness of ~250 m, influenced by interactions with mesoscale eddies and abyssal thermal fronts. A simplified model of sediment transport suggests that such variations in BML structure could significantly influence the dispersal of sediments resuspended by seabed mining activities.&#13;
&#13;
Second, idealized model experiments are conducted to explore the genesis of benthic storms – episodes of strong near-bottom flows and sediment entrainment – underneath an unstable, surface-intensified jet resembling the Gulf Stream east of Cape Hatteras. In these experiments, the baroclinic instability of the jet gives rise to deep cyclonic and anticyclonic eddies through eddy barotropization and to high levels of eddy kinetic energy at abyssal depths through the convergence of vertical eddy pressure fluxes. The near-bottom currents are comparable in magnitude to those observed during benthic storms, with vertical shears strong enough to produce BMLs up to O(100) m thick. Deep cyclonic eddies transport particles from near the bottom over the entire BML and could contribute to benthic nepheloid layers. The results suggest that the abyssal response to the intrinsic instability of surface-intensified currents could contribute significantly to subannual variability near the seafloor.&#13;
&#13;
Third, a model simulation of western North Atlantic circulation is performed to study the deep cyclones (DCs) observed beneath Gulf Stream meander troughs. The characteristics of the simulated DCs compare well with field observations. The negative pressure tendency during cyclogenesis arises from a small imbalance between the sea surface depression and the vertically-integrated increase in seawater density. Vortex stretching is the primary source of cyclonic vorticity, while vortex tilting is a non-negligible sink. The deep pressure tendency, vorticity fluxes, and ageostrophic flows are diagnosed, and their similarities and differences with mid-latitude synoptic cyclones in the atmosphere are discussed. Near-bottom currents in DCs dominate the basin-scale bottom energy dissipation and transport fluid over ≥1000 km horizontally and O(100) m vertically within 3~4 months, suggesting that they provide an efficient mechanism for tracer and material transport in the abyssal interior.&#13;
&#13;
Collectively, this thesis highlights the importance of transient, mesoscale processes in contributing to subannual variability in the abyssal ocean, particularly near the seafloor. The findings have broader relevance for monitoring the environmental impacts of human activities, including deep-sea mining and carbon sequestration. While further questions remain for future investigation, this work underscores the need for sustained in-situ observations in the abyssal ocean and calls for the implementation of high vertical resolution in numerical ocean circulation models.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164577</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Democratizing High-Performance DSL Development with the BuildIt Framework</title>
<link>https://hdl.handle.net/1721.1/164576</link>
<description>Democratizing High-Performance DSL Development with the BuildIt Framework
Brahmakshatriya, Ajay
Modern high-performance software from a variety of domains relies on hand-written and hand-optimized libraries to obtain the best performance. Besides general fine-grained operators that can be composed to write entire applications, these libraries also provide coarser-grained fused and hand-optimized operators that are much faster due to being optimized for a specific sequence of operations. However, as application needs keep growing, library writers are not able to keep up and have to make the tradeoff of either sacrificing performance or generality. Domain-specific languages (DSLs) are able to break this tradeoff by automatically generating the best implementation for any arbitrary sequence of operations specified by the end user. However, DSL compilers face a bigger challenge: they require substantial compiler knowledge to implement parsers, IRs, analyses and transformations, and code generation, all of which lie outside the expertise of a typical domain expert. To make compiler technology and the benefits of code generation more accessible to domain experts, I propose the use of multi-stage programming, which lets developers write library-like code that can be combined to generate the most efficient implementation for any whole program. In this thesis, I discuss the design of different multi-stage programming systems and their benefits and drawbacks. Next, I propose Re-Execution Based Multi-Staging (REMS), which addresses a critical flaw in many imperative multi-staging systems: the side-effect leak problem. I introduce BuildIt, an implementation of REMS in one of the most popular languages for writing high-performance applications, C++, in a type-based, lightweight way without changing the compiler. I describe the internals of BuildIt and how it implements the key features of REMS. Furthermore, I describe a set of extensions implemented on top of BuildIt that ease the development of high-performance DSLs. 
I show the application of BuildIt to create three DSLs, EasyGraphit, NetBlocks, and BREeze, targeting graph analytics, ad-hoc network protocol generation, and regex matching, respectively. These case studies show a 10-100x reduction in the effort required to implement DSLs that perform on par with or better than state-of-the-art compiler frameworks while targeting diverse architectures like CPUs and GPUs. Finally, I introduce D2X, a system designed to add extensible and contextual debugging support to DSL implementations without modifying off-the-shelf debuggers or dealing with complex debugging formats. I then show how applying D2X to the BuildIt system greatly improves the debugging experience for all DSLs written with BuildIt.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164576</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring Li₄Ti₅O₁₂ Thin Film Carrier Kinetics Through Solid Solution Doping for Battery and Memristor Applications</title>
<link>https://hdl.handle.net/1721.1/164575</link>
<description>Tailoring Li₄Ti₅O₁₂ Thin Film Carrier Kinetics Through Solid Solution Doping for Battery and Memristor Applications
Buzzell, Drew E.
Lithium titanate, Li₄Ti₅O₁₂ (LTO4), is a promising anode material for solid-state battery (SSB) applications due to its zero-strain behavior during cycling, excellent chemical stability, and cyclability. As a thin film, its applications expand to integrated circuits, sensors, flexible batteries, IoT devices, and memristors. Across these, precise control of mixed Li⁺ ionic–electronic transport is vital. While dopants have been shown to improve electron conduction and Li⁺ diffusion in LTO4 powders, thin-film studies remain limited. To bridge this gap, we investigate solid solution dopants (Nb⁵⁺, V⁵⁺, Mg²⁺, Cu²⁺) and their effects on LTO4 thin-film kinetics and performance in batteries and memristors. Films doped with Mg, Cu, Nb, and V at a 0.2 M dopant concentration were deposited on Nb-doped SrTiO₃ substrates. Cyclic voltammetry and impedance spectroscopy show that Mg, Nb, and V improve kinetic metrics, while Cu reduces diffusivity but boosts electronic conductivity. Through galvanostatic cycling-based capacity, rate capability, and stability measurements, we found that while all dopants displayed enhanced rate performance, the capacity improved only with Mg, Nb, and V. Furthermore, the Mg-doped film exhibited an unstable capacity, leaving the Nb- and V-doped thin films as the best-performing battery anodes overall. For memristors, current–voltage cycling measurements revealed that devices doped with low concentrations (0.05 M) of Cu and Nb showed the largest improvements in cycle-to-cycle stability, switching ON-voltages, and ON–OFF current ratios, along with the smallest loss in peak current with increasing scan rate. At higher dopant concentrations, however, devices saw relative drops in performance. In summary, the inclusion of dopants in LTO4 at the right concentration level improves both battery and memristor performance, enabling multifunctional systems based on a single material.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164575</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strain-resolved transcriptomics: exploring functional heterogeneity of the gut microbiota in health and disease</title>
<link>https://hdl.handle.net/1721.1/164574</link>
<description>Strain-resolved transcriptomics: exploring functional heterogeneity of the gut microbiota in health and disease
Burgos Robles, Emanuel Felipe
The gut microbiome plays a critical role in inflammatory bowel diseases (IBDs), yet current analyses treat bacterial species as functionally uniform, ignoring extensive strain-level diversity that may drive disease mechanisms. Here, we developed a strain-resolved metatranscriptomics framework to investigate how transcriptional activity varies across bacterial lineages and relates to IBD pathogenesis. Using paired metagenomics and metatranscriptomics data from 1,067 fecal samples (103 IBD and 335 non-IBD patients), we first constructed phylogenetic trees for over 250 bacterial species using the single nucleotide variants within essential housekeeping genes, enabling the identification of bacterial strains. Next, we devised a statistical approach to assign mRNA reads to these strains, leveraging the natural genetic variation present across them. Our analysis revealed that closely related bacterial strains exhibit dramatically different transcriptional programs, with some strains enriched in IBD patients showing upregulation of genes involved in stress response, sugar metabolism pathways, and antimicrobial resistance. Notably, we identified transcriptionally active but genomically low-abundance taxa, highlighting the importance of measuring the transcriptional activities of strains beyond species composition. Lineage-aware differential expression analysis uncovered strain-specific adaptations to inflammatory environments. This strain-resolved approach provides a powerful framework for understanding microbial functional heterogeneity and identifying specific bacterial lineages that may contribute to disease pathogenesis, potentially guiding more targeted microbiome-based therapeutic interventions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164574</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of CH–π Interactions in Protein-Carbohydrate Binding</title>
<link>https://hdl.handle.net/1721.1/164573</link>
<description>The Role of CH–π Interactions in Protein-Carbohydrate Binding
Keys, Allison M.
Protein-carbohydrate binding is essential for biological processes, including cellular recognition and immune signaling. Binding is driven by several types of non-covalent interactions: hydrogen bonding, metal ion coordination, and the less well-understood CH–π interactions. CH–π interactions are pervasive in protein-carbohydrate binding sites and have emerged as critical drivers of protein–carbohydrate recognition; however, the energetics of CH–π stacking interactions, their orientational landscapes, and their interplay with other non-covalent interactions have been unclear. &#13;
In this thesis, I identified carbohydrate-aromatic CH–π stacking interactions from crystallographic structures in the Protein Data Bank. I performed quantum mechanical calculations to quantify interaction energies and found that CH–π stacking interactions can be more favorable than hydrogen bonds. Using atomistic simulations, I also demonstrated that CH–π stacking interactions are necessary for human galectin-3 binding to lactose. To assess the orientational landscape of CH–π stacking interactions, I evaluated the orientations of CH–π stacking interactions formed by β-D-galactose and found that numerous orientations are highly favorable. I then identified carbon atom distances that define an orientational landscape for these interactions. To assess the interplay between non-covalent interactions in protein-carbohydrate binding sites, I used CH–π distance features to bias metadynamics simulations of a curated set of protein–β-D-galactoside complexes. From these simulations, I found that while bound carbohydrates sample many CH–π stacking orientations, the hydrogen bonds in the protein binding site drive the optimal orientation of each ligand. Longer carbohydrate ligands with more hydrogen bonding constraints have more specific orientational dependence, while ligands in binding sites with a reduced number of hydrogen bonds occupy a broader range of orientations. Unlike hydrogen bonds, CH–π stacking interactions confer orientational flexibility: enzymes can exploit multiple CH–π stacking interactions to facilitate the translocation of polysaccharide substrates. Extending this analysis to other carbohydrates, I showed that carbohydrate stereochemistry drives the orientational preferences of CH–π stacking interactions; however, there is also a tradeoff between the presence of hydrogen bonds to charged amino acids and the CH–π interaction strength for each carbohydrate. 
Overall, this thesis demonstrates that CH–π interactions are favorable and confer high orientational flexibility and that hydrogen bonds act in concert with CH–π interactions to stabilize protein-carbohydrate binding. Tuning the number and positions of these interactions through protein engineering should alter protein selectivity and ligand movement in protein binding sites.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164573</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Networking using Waveguide Quantum Electrodynamics</title>
<link>https://hdl.handle.net/1721.1/164572</link>
<description>Quantum Networking using Waveguide Quantum Electrodynamics
Almanakly, Aziza
The architectural principle of modularity enables the construction of complex systems from simpler components, each responsible for a particular function. The quantum computer is an intricate system comprising fragile, error-prone parts known as qubits. Entanglement distribution across a network of non-local processing modules facilitates robust and extensible quantum computation. In modular quantum architectures, photons are natural quantum information carriers which propagate through interconnects between processing nodes. In this thesis, we engineer a quantum interconnect between superconducting modules underpinned by the physics of waveguide Quantum Electrodynamics (wQED). First, we realize a multi-qubit module that exploits quantum interference to emit microwave photons into a waveguide with a specified propagation direction. Next, we construct the quantum interconnect by coupling two modules to a common waveguide and demonstrate directional (chiral) photon emission and absorption. Finally, using this chiral quantum interconnect, we generate remote entanglement, establishing a key resource for distributed quantum computation in an all-to-all network architecture.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164572</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large Language Models and Quantifying the Regulatory Expenses of Affordable Housing: A Thorough Examination Utilizing Generative Assessment</title>
<link>https://hdl.handle.net/1721.1/164571</link>
<description>Large Language Models and Quantifying the Regulatory Expenses of Affordable Housing: A Thorough Examination Utilizing Generative Assessment
Xu, Bangjie
This thesis presents an innovative methodology using Large Language Model-based methods to extract and quantify housing regulations from municipal zoning codes, enabling the most comprehensive examination of regulatory costs at the municipal level across California to date. A multi-stage extraction framework is devised that delivers 85-95% accuracy in identifying and standardizing complex regulatory requirements from legal documents. Applying this methodology to more than twenty California cities over the period 2015-2025, it is estimated that regulatory constraints raise development costs by roughly 5% to 10% ($50,000 to $100,000+) per housing unit, with the most acute constraints in the state’s coastal metros. The method also finds that regulatory costs reduce housing supply elasticity from 1.24 in low-regulation jurisdictions to 0.08 in high-regulation areas. The LLM-based framework allows analyses at an unprecedented scale and granularity, revealing, for example, that regulatory relaxation through streamlining policies like the Los Angeles Transit Oriented Communities program boosts housing production in eligible zones by 43%. This study makes significant contributions to the restructuring of California’s housing regulation system in response to the affordability crisis, and its methodology offers a replicable tool for regulatory analysis in other policy domains.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164571</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Macro-Finance</title>
<link>https://hdl.handle.net/1721.1/164570</link>
<description>Essays in Macro-Finance
Batista, Quentin
In Chapter 1 (joint with J.R. Scott), we revisit the high-frequency and narrative approaches to estimating the effects of monetary policy shocks. We find that state-of-the-art estimates using both approaches are biased: high-frequency estimates due to nonlinear predictability and narrative estimates due to regularization. To correct for the bias in these approaches, we propose a new estimation procedure called LP-DML that combines ideas from double/debiased machine learning with the local projections framework. We find that LP-DML results in significantly smaller effects of monetary policy on macroeconomic outcomes. In Chapter 2 (joint with Taisuke Nakata and Takeki Sunakawa), we study the following question: how can a central bank credibly implement a “lower-for-longer” strategy? To answer this question, we analyze a series of optimal sustainable policy problems—indexed by the duration of reputational loss—in a sticky-price model with an effective lower bound (ELB) constraint on nominal interest rates. We find that, even when it lacks commitment, the central bank can still credibly keep the policy rate at the ELB for an extended period, though not as long as under the optimal commitment policy, and meaningfully mitigate the adverse effects of the ELB constraint on economic activity. In Chapter 3, I examine the impact of central bank real estate purchases on financial markets, focusing on the Bank of Japan’s (BoJ) intervention in the Real Estate Investment Trust (REIT) market. Using a regression discontinuity design that exploits a discontinuity in the BoJ’s policy rule, I find that a typical intervention — amounting to about 0.014% of market capitalization — leads to an increase of 0.1% to 0.2% in REIT prices in the hours following the intervention. However, at longer horizons, the interventions do not have a significant effect on REIT prices. 
These findings suggest that the BoJ did not achieve the program’s intended objective of significantly reducing the risk premium on real estate assets.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164570</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward an Integrative Study of Human-AI Interaction</title>
<link>https://hdl.handle.net/1721.1/164569</link>
<description>Toward an Integrative Study of Human-AI Interaction
Alsobay, Mohammed
As artificial intelligence (AI) systems are increasingly embedded in the workflows of individuals and groups, designers and researchers of human-AI interaction (HAI) navigate a vast design space of possible configurations, making decisions that span algorithmic parameters, interface choice, and interaction protocols. This thesis develops an integrative approach that examines how design factors combine and interact to determine the outcomes of human-AI collaboration. &#13;
&#13;
Chapter 1 synthesizes prior HAI research into a coherent design space framework encompassing algorithms, interfaces, users, and task settings, motivating a research program for systematic exploration of interdependencies between these factors. Chapters 2 and 3 turn to group-AI interaction through large-scale behavioral experiments. Chapter 2 investigates how social information (both direct conversation and peer behavior indicators) affects individual reliance on algorithmic decision support. The study reveals that while social information modulates the effects of performance feedback and model explanations on reliance, it does not improve predictive accuracy, illuminating critical tensions between social mechanisms and system design. Chapter 3 examines large language models as facilitators of group deliberation in hidden profile tasks. While LLM facilitation increased information sharing volume, density, and breadth, it did not improve decision quality, highlighting fundamental challenges in group-AI system design beyond information aggregation.&#13;
&#13;
Chapter 4 advances an integrative approach to HAI research, emphasizing shared design spaces, systematic exploration strategies, and predictive models that generalize across contexts. The chapter provides methodological guidance and a tractable roadmap for advancing this integrative research agenda, laying the foundation for a more context-aware science of human-AI collaboration.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164569</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging design to build with less: Evaluating the embodied carbon reduction potential of architectural design across scales</title>
<link>https://hdl.handle.net/1721.1/164568</link>
<description>Leveraging design to build with less: Evaluating the embodied carbon reduction potential of architectural design across scales
Feickert, Kiley
Reducing embodied carbon (EC) in structural systems (the most significant contributor to EC in a building) is urgent to address the simultaneous need to reduce global warming and increase urban density. Much of the policy and research to date to reduce EC has focused on material-scale interventions or substitutions. However, EC depends on two factors: 1) the carbon intensity of the processes used to manufacture construction materials, and 2) the volume of raw materials required. Architects have significant agency to reduce the volume of structural materials in a building (and the resulting emissions) since the required quantity depends on design decisions architects make, including column spacing, structural typology, massing, etc. To date, most methods used to estimate EC during early-stage design do not: 1) integrate with architects’ existing design workflows, 2) evaluate multiple material systems simultaneously, and/or 3) include structural analysis to estimate material quantities. This functionality is critical so that designers can understand which decisions EC is sensitive to and evaluate design and EC tradeoffs before significant carbon is locked in.&#13;
&#13;
To address this problem, this dissertation presents a method for transparent estimation of structural material quantities, intended to inform architectural design, policy, and other emerging EC standards. This method is used to contribute a building-scale analysis of the effectiveness of emerging U.S. EC policies, which focus on different scales of intervention. These policies are evaluated in isolation and in combination with strategic design levers that take advantage of structural mechanics to reduce material quantities for various building configurations and material systems. It finds that the most prominent policy approach, “Buy Clean” materials, reduces EC by only ~9% and ~16% for steel and concrete systems, respectively, compared to strategic design choices that have the potential to yield savings of up to ~79%. This dissertation also identifies building massing as a key lever in the EC outcomes of structural systems and proposes a method to quantify the impact of massing using automated structural design and analysis. It finds that in some situations, cantilevered massing typologies can be realized with no carbon penalty if efficient configurations are used. Conversely, inefficient configurations can incur a significant carbon penalty (2.4x) compared to normative massing. The presented results highlight the potential of design to reduce demand-side EC across scales.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164568</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generalizable Robot Manipulation through Unified Perception, Policy Learning, and Planning</title>
<link>https://hdl.handle.net/1721.1/164567</link>
<description>Generalizable Robot Manipulation through Unified Perception, Policy Learning, and Planning
Fang, Xiaolin
Advancing robotic manipulation to achieve generalization across diverse goals, environments, and embodiments is a critical challenge in robotics research. While the availability of data and large-scale training has brought exciting progress in robotic manipulation, current methods often struggle to generalize to unseen, unstructured environments and to solve long-horizon tasks. In this thesis, I present my work in robot learning and planning that enables multi-step manipulation in partially observable environments, working towards general-purpose embodied agents. Specifically, I discuss my work on 1) constructing a modular framework that combines affordance estimation from learned perception models with task-and-motion planning (TAMP) for object rearrangement in unstructured scenes, 2) learning generative diffusion models of robot skills, which can be composed to satisfy unseen combinations of environmental constraints through inference-time optimization, and 3) leveraging large vision-language models (VLMs) to build task-oriented visual abstractions, allowing skills to generalize across different environments with only 5 to 10 demonstrations. Together, these approaches contribute to the generality and scalability of embodied agents for solving real-world manipulation in unstructured environments.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164567</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards High-Dimensional Generalization in Neural Networks</title>
<link>https://hdl.handle.net/1721.1/164566</link>
<description>Towards High-Dimensional Generalization in Neural Networks
Boopathy, Akhilan
Neural networks excel in a wide range of applications due to their ability to generalize beyond training data. However, their performance degrades on high-dimensional tasks without large-scale data, a challenge known as the curse of dimensionality. This thesis addresses this limitation by pursuing three key objectives aimed at understanding and improving neural network generalization. 1. We aim to investigate the scaling laws underlying generalization in neural networks, including double descent, a phenomenon in which, as a model’s capacity or training data increases, the test error temporarily rises before continuing to decrease. In particular, we will have two goals: 1) a better understanding of when double descent can and cannot be empirically observed and 2) a better understanding of scaling laws with respect to training time. 2. Inductive bias refers to the set of assumptions a learning algorithm makes to predict outputs on inputs it has not encountered. We propose quantifying the amount of inductive bias required for a model to generalize well with a fixed amount of training data. By developing methods to measure inductive bias, we can assess how much information model designers need to incorporate into neural networks to improve their generalizability. This quantification can guide the design of harder tasks that better test a model’s generalization. 3. Finally, we aim to develop new methods to enhance neural network generalization, particularly focusing on reducing the exponential number of training samples required for high-dimensional tasks. This involves creating algorithms and architectures that can learn effectively from limited data by incorporating stronger inductive biases. We will focus on two inductive biases: 1) learning features of the training loss landscape correlated with generalization and 2) using modular neural network architectures. 
We expect that these techniques can improve generalization, particularly in high-dimensional tasks. Together, these contributions aim to deepen our theoretical understanding and develop practical tools for enabling neural networks to generalize effectively from limited data.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164566</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remote Control: Art, Technology, and the Politics of Distance (1966-1972)</title>
<link>https://hdl.handle.net/1721.1/164565</link>
<description>Remote Control: Art, Technology, and the Politics of Distance (1966-1972)
Wexelblatt, Nina Rrose
Platforms carrying dancers across a stage, doors sliding open as if by magic, and simultaneous Happenings in Berlin and Buenos Aires: remote control promised thrills as postwar artists experimented with technologies of distance. Focused on the half-decade between 1966 and 1972, this thesis intervenes in the history of art and technology to argue that a desire to activate the supposedly empty space between artist, art object, and audience effected a new fixation on the nature of that distanced interval, leading artists to incorporate actual remote control technologies into their work. This impulse grew from an unorthodox reading of the work of modernist painters, particularly Jackson Pollock. Where a generation of critics had canonized “presentness” and medium specificity, a younger cohort read the work differently, finding in it permission to embrace remoteness, intermedia experimentation, and political messaging. &#13;
&#13;
Artists including Robert Rauschenberg, Allan Kaprow, Marta Minujín, Wolf Vostell, and Carolee Schneemann, among others, undertook radical experiments with remote systems, often in collaboration with engineers. Theirs was not a technocratically neutral position; this thesis demonstrates that these artists consciously cast the “remoteness” enabled by new technologies as a charged concept, just as controlled distance emerged to define military and industrial relations on domestic, urban, and geopolitical scales. Remote control enabled artists to incorporate, not reject, the expanding frames of reference taking place outside of the sanctioned spaces of the art studio or gallery, from automation to satellite communications to warfare. Artists’ uses of remote technologies intentionally surfaced questions about critical power relations, tying the stakes of their work to debates about the future of U.S. social and economic control and development. In doing so, it also crystallized a newly diffuse, participatory artistic subject: the controller.&#13;
&#13;
The introduction theorizes “remote control” in historical and historiographic context. A second chapter follows Automation House (1970-1972), a Manhattan art space that combined labor mediation and media art to experiment with the American postindustrial labor economy to come. A third chapter centers on Three Country Happening (1966), which took place in New York, Buenos Aires, and Berlin, supposedly mediated by satellite—foiled by the uneven development of the Cold War-era satellite system itself. A fourth chapter delves into Snows (1967), a multimedia performance in protest of the war in Vietnam, which incorporated audience-controlled feedback sensors. A concluding discussion traces the ongoing nature of remote control as it implicates artists and audiences alike in a network of shared responsibility.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164565</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reimagining Public Land: Municipal Land Use to Ease Housing Crisis in Boston</title>
<link>https://hdl.handle.net/1721.1/164564</link>
<description>Reimagining Public Land: Municipal Land Use to Ease Housing Crisis in Boston
Murphy, Ryan
Boston is in the midst of a severe housing crisis, driven by decades of underproduction, rising construction costs, restrictive zoning, and an inelastic real estate market that has resulted in persistent affordability challenges. This thesis explores the untapped potential of city-owned land as a powerful tool to increase housing supply and affordability in Boston. Using Boston’s 2022 Citywide Land Audit and detailed development assumptions, the analysis estimates that between 19,000 and 31,000 new housing units could be constructed across city-controlled parcels, including between 3,200 and 6,100 affordable units under the current Inclusionary Development Policy. The research draws on case studies from peer cities such as Chicago and Atlanta, where municipal land has been successfully leveraged through transparent disposition processes, fast-tracked entitlements, and flexible affordability models. It argues for a policy shift in Boston toward a more streamlined, market-aware, and scalable land release strategy that prioritizes speed, cross-subsidization, and financial feasibility. Key recommendations include expanding the Welcome Home, Boston program to include mixed-income and rental housing, implementing predictable RFP cycles, offering tax abatements, and expediting the entitlement process.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164564</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modular Zipping for Transformable and Dynamic Systems</title>
<link>https://hdl.handle.net/1721.1/164563</link>
<description>Modular Zipping for Transformable and Dynamic Systems
Hagemann, Niklas
There is a need for products, machines and environments that can change shape, transform and evolve according to their use. This thesis proposes the design of a simple, modular actuator based on reversible folding and interlocking (zipping) of flexible 3D printed strips. The proposed zipper design allows for continuous control states between a compact and fully deployed state. The modular actuators can be integrated into a variety of systems to enable compact, shape- and stiffness-changing structures, robots and other devices. Designs are presented for single- and double-zipper modules using the same basic zipper design. The modules can be used as modular components of compact robotic systems with the ability to expand and contract according to their environment, or used as adjustable structural components to create deployable, shape- and stiffness-changing objects. The zipper design points the way towards simplified mono-material components that embed transformation and reversibility into everyday devices, products and spaces, enabling objects that are as easy to transform, reconfigure and reverse as they are to manufacture.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164563</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Embodied Representation of Time in Virtual Reality</title>
<link>https://hdl.handle.net/1721.1/164562</link>
<description>Embodied Representation of Time in Virtual Reality
Kim, Suwan
Recent advancements in 3D graphics and AI-assisted generative techniques have accelerated the creation of realistic scenes for immersive technologies, including virtual reality, yet most systems continue to encode time as a linear parameter, relying on timeline-based playback. Mesh-based representations are typically constrained by fixed topologies and rely on predefined animations, which limit their capacity to encode temporal change as a spatial or perceptual phenomenon. In reality, human experience of time is embodied and dynamic, perceived through interaction and memory. Existing digital systems fail to capture this dimension, reducing time to a passive parameter. This thesis proposes a framework for representing time as an embodied and spatial dimension within virtual reality by embedding it directly into the geometry and interaction logic of point cloud data. The system consists of three parts: (1) processing 2D images into layered volumetric point clouds to enable structural fluidity and temporally responsive spatial form; (2) enabling perceptual and spatial modulation in response to user distance and contact, with color influencing the character of change and opacity shaping its perceptual reveal at both global and local scales; and (3) enabling real-time visualization of the modulated point cloud through a custom pipeline optimized for mobile virtual reality. By embedding temporal dynamics directly into geometry and interaction logic, this thesis contributes a novel representational approach to spatiotemporal modeling in immersive systems. By doing so, we create new opportunities for architectural visualization, interactive simulations, game design, and reimagining how we perceive and construct digital spaces.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164562</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Behavioral Economics and Sophisticated Procrastination</title>
<link>https://hdl.handle.net/1721.1/164561</link>
<description>Essays on Behavioral Economics and Sophisticated Procrastination
Chen, Xi
Procrastination is a widespread yet complex behavior that resists simple explanation. This dissertation integrates theoretical modeling with experimental evidence to examine procrastination through the lens of sophisticated decision-making. It reframes procrastination not merely as a deviation from rationality, but as a behavior shaped by strategic trade-offs, self-awareness, and individual heterogeneity. The first essay develops a theoretical model of Perfectionistic Procrastination, proposing that individuals with high internal standards may delay tasks not as a simple lapse in self-control, but as a strategic response to the anticipated costs of sustained effort. In this framework, deadlines act as external constraints that help perfectionists limit open-ended striving and bring tasks to completion. An accompanying experiment tests the model’s prediction and finds that perfectionists are more likely to prefer deadlines. These results suggest that, in some cases, procrastination may reflect a structured strategy rather than a purely irrational failure of self-control. The second essay explores the phenomenon of Sophisticated Procrastination, challenging traditional models that attribute procrastination to naïveté. Instead, it proposes that even individuals who are aware of their tendency to delay may struggle to act on that awareness. Two experimental studies using a menu-choice framework examine how people choose task timings. In Study 1, participants preferred earlier deadlines when flexibility was available but shifted toward later options when required to commit, revealing a gap between intention and action. Study 2 identified diverse patterns of deadline preferences: while many participants actively avoided the latest possible deadline, their hesitation to commit to any specific deadline suggests a deeper tension rooted in uncertainty or discomfort with commitment. 
These findings provide early empirical support for Sophisticated Procrastination, indicating that self-awareness alone may not be sufficient to overcome procrastination. The third essay introduces the idea of Prosocial Procrastination, describing the tendency to delay tasks that benefit others, such as charitable activities, more than those with self-interested outcomes. Using two distinct experimental designs, one based on conjoint analysis and the other on single-attribute choice, the studies show that individuals are more likely to prefer longer deadlines when working for a charity than when working for themselves. These findings offer suggestive evidence for Prosocial Procrastination and contribute to the growing literature on the intersection of social preferences and time preferences.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164561</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Patent Visibility and the Diffusion of Trapped Knowledge: Evidence from US Grants</title>
<link>https://hdl.handle.net/1721.1/164560</link>
<description>Patent Visibility and the Diffusion of Trapped Knowledge: Evidence from US Grants
Yao, Randol H.
Valuable knowledge developed in one part of the world may remain “trapped” locally due to frictions in how knowledge is recognized and shared globally. This paper examines how granting US patents to foreign-origin inventions—by elevating their visibility and credibility—untraps the knowledge and facilitates global diffusion. Using examiner leniency as an instrument, complemented by a difference-in-differences design, I find that US grants of home country patents significantly increase both the likelihood and intensity of forward citations, including marked increases from third countries. A novel measure of “trappedness” reveals that knowledge from historically more trapped countries and sectors sees larger diffusion benefits after the US grants. These findings highlight the central role of the US as a platform of global knowledge recognition and diffusion, particularly in turning overlooked ideas into globally relevant innovations.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164560</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of the East China Sea Continental Shelf Circulation Northeast of Taiwan Surrounding Mien-Hua Canyon</title>
<link>https://hdl.handle.net/1721.1/164559</link>
<description>Characterization of the East China Sea Continental Shelf Circulation Northeast of Taiwan Surrounding Mien-Hua Canyon
Rafferty, Lieutenant Commander Keefe
Submarine canyons have a proven and direct influence on continental shelf circulation and flow dynamics, especially in relation to western boundary currents. There are two key circulation features northeast of Taiwan on the East China Sea continental shelf: (1) the cold dome, a cyclonic feature that appears primarily in summer and is associated with upwelling, and (2) Kuroshio intrusions onto the continental shelf in the vicinity of Mien-Hua Canyon. This paper is a descriptive physical oceanography study with a focus on characterizing the circulation patterns northeast of Taiwan surrounding Mien-Hua Canyon, closely correlating these patterns with the migration of the Kuroshio and its variability and intrusions onto the southern East China Sea continental shelf, leading to the formation of the cold dome. The Institute of Oceanography at the National Taiwan University and WHOI executed a joint international field survey at Mien-Hua Canyon aiming to improve the understanding of canyon flow dynamics between the East China Sea continental shelf northeast of Taiwan and the Kuroshio as the North Pacific Gyre western boundary current. This joint oceanographic expedition expands on previous joint US/Taiwan physical oceanographic and ocean acoustic studies in the China Seas dating back to ASIAEX in the South China Sea during 2000-2001 and QPE in the East China Sea during 2008-2009. The strengthening and weakening of Kuroshio transport and intensity northeast of Taiwan is closely correlated with the timescales of mesoscale westward propagating eddies arriving at the East Taiwan Channel. When a canyon has a Rossby number ~1 or Rossby radius equivalent to the width of the canyon in a region of left-bounded flow, induced cyclonic flow will experience an upwelling regime within the canyon system with dominant upwelling located at the downstream canyon rim vertically constrained by Rossby Height.
Observational analysis of canyon bottom-moored ADCPs and vertical temperature arrays supports previous theory on submarine canyon dynamics on a continental shelf. Satellite sea surface temperature and absolute dynamic topography observations render the formation of a cold dome northeast of Taiwan coincident with this joint oceanographic survey.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164559</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Wallets to Wages: Consumer Income, Job Design, and Pay Disparities</title>
<link>https://hdl.handle.net/1721.1/164558</link>
<description>From Wallets to Wages: Consumer Income, Job Design, and Pay Disparities
Roh, Soohyun
Pay differences between organizations are a key source of wage inequality. I propose a novel account of these differences by starting from the consumers that these businesses serve. Firms that serve high-income consumers specialize jobs into higher-paying and higher-skilled positions focused on quality, while those that serve lower-income consumers emphasize cost minimization by requiring workers to perform a wider range of general tasks. Matching consumer foot traffic data and establishment-level wage records, I find that establishments serving higher-income consumers pay their workers more. This effect holds when comparing establishments within the same neighborhoods and industries. Longitudinally, establishments increase wages when they shift toward higher-income customers. Analysis of online job postings further reveals that jobs at higher-income-serving firms involve a narrower set of tasks that command higher market value. These findings show how consumer markets shape firms’ internal job design and contribute to pay inequality across organizations.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164558</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Fate of Federal Buildings: Real Estate Disposition and the Future of Washington, D.C.</title>
<link>https://hdl.handle.net/1721.1/164557</link>
<description>The Fate of Federal Buildings: Real Estate Disposition and the Future of Washington, D.C.
Mulcahy, Robby L.
The United States federal government is the largest property owner in the country, with more than 370 million square feet of real estate under its control. Much of this portfolio is outdated, underutilized, and located in the urban cores of American cities. Nowhere is this more evident—or more consequential—than in Washington, D.C., where the federal government controls approximately 27% of the office market. As federal agencies adopt hybrid work models, and as the operational needs of government evolve, the existing real estate footprint has become increasingly inefficient, expensive, and misaligned with civic and market realities. This thesis investigates the opportunity to rethink federal land ownership and management as a catalyst for urban regeneration, civic stewardship, and housing production.&#13;
&#13;
Using the James V. Forrestal Building as a focal case study, the research examines the historical, policy, and spatial dynamics that have led to the current moment of reckoning. Located on Independence Avenue SW, straddling 10th Street between the National Mall and the Wharf, Forrestal is emblematic of the postwar federal design ethos: monumental, inward-facing, and hostile to street life. Once a symbol of bureaucratic permanence, the building now stands as a physical and symbolic barrier to urban connectivity and civic vitality. The case of Forrestal is used to explore broader questions: How can the federal government dispose of surplus property more effectively? What policy tools exist—or are needed—to unlock value and enable redevelopment? And what role should cities play in shaping the outcomes of federal land disposition?&#13;
&#13;
The thesis employs a mixed-methods approach that includes policy analysis, stakeholder interviews, precedent case studies, and spatial analysis of Southwest D.C. The work identifies a range of obstacles to effective disposition, including Title V of the McKinney-Vento Homeless Assistance Act, opaque OMB budget scoring rules, jurisdictional fragmentation, and the absence of a coordinating authority across federal agencies. It also identifies key lessons from successful projects such as The Yards, Walter Reed, and the Volpe Center, where thoughtful structuring and strong federal-local partnerships enabled transformative redevelopment of surplus land.&#13;
&#13;
The thesis concludes with ten detailed recommendations for reform, including reauthorization of the Federal Assets Sale and Transfer Act (FASTA), modernization of Title V and OMB scoring, the creation of Federal Redevelopment Zones, and the prioritization of housing, civic infrastructure, and design quality in disposition strategy. It argues that the federal government must shift from a passive landlord to an active steward of public land—one that collaborates with cities, integrates public benefit, and reflects democratic values through the built environment.&#13;
&#13;
In this moment of shifting federal needs, declining office demand, and urban transformation, the question is not whether federal real estate reform is needed—it is whether we will seize the opportunity. The fate of buildings like Forrestal will shape not only the skyline of Washington, D.C., but also the federal government’s legacy in America’s cities for generations to come.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164557</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyst Incentives</title>
<link>https://hdl.handle.net/1721.1/164556</link>
<description>Analyst Incentives
Green, Brice
Analyst forecasts have been shown to reflect substantial behavioral biases and predict a number of macroeconomic phenomena. While we typically treat reported forecasts as statistical expectations, under uncertainty the reported point estimate will be sensitive to the payoff structure facing the forecaster. Using data on careers from LinkedIn, I describe the incentive structures faced by analysts, shedding light on the extent to which pay and career success are tied to performance. Further, I extend a causal estimator to identify credible counterfactual forecasts and provide tentative causal evidence of the relationship between forecast errors and promotions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164556</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maverick Neuroscientist: What Does a Life in Science Look Like Outside the Ivory Tower? : Dr. Eugenio Vargas-Peña is a renowned psychiatrist in Paraguay, who conducts neuroscience research without university ties, funding, or peer review. Is his embodiment of the gentleman scientist an alternate path for those who want to break away from institutionalized science?</title>
<link>https://hdl.handle.net/1721.1/164555</link>
<description>Maverick Neuroscientist: What Does a Life in Science Look Like Outside the Ivory Tower? : Dr. Eugenio Vargas-Peña is a renowned psychiatrist in Paraguay, who conducts neuroscience research without university ties, funding, or peer review. Is his embodiment of the gentleman scientist an alternate path for those who want to break away from institutionalized science?
Chomik-Morales, Jessica
This longform narrative investigates the life and work of Dr. Eugenio Vargas-Peña, a neuropsychiatrist in Asunción, Paraguay, who built a fully functional lab in his countryside home. Vargas-Peña conducts brain research independently, guided by decades of self-study, clinical practice, and an unwavering belief in the value of curiosity-driven inquiry. The piece interweaves historical context, character study, and personal narrative, using the author's own background in neuroscience and science communication to frame an inquiry into legitimacy, recognition, and alternative pathways in science. It asks: What defines a scientist today? Who gets to decide which ideas are taken seriously? And what are the consequences—creative or catastrophic—of working outside institutional boundaries? Through the lens of one man's eccentric yet earnest intellectual journey, this thesis invites broader reflection on the pressures shaping contemporary research and the enduring romance of unorthodox scholarship.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164555</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Systems for Unsupervised Time Series Anomaly Detection</title>
<link>https://hdl.handle.net/1721.1/164554</link>
<description>Machine Learning Systems for Unsupervised Time Series Anomaly Detection
Alnegheimish, Sarah
Modern assets – from launched satellites to electric vehicles – output dense, multivariate time series data that must be monitored for deviations from “normal” behavior. This monitoring task is referred to as time series anomaly detection. The current state of the industry still depends on fixed or heuristic thresholds that often drown operators in false alarms, and can miss the subtle, context-dependent faults that matter most. This thesis addresses unsupervised time series anomaly detection as an end-to-end problem, asking how we can learn, evaluate, and deploy models that judiciously flag anomalies while remaining intuitive to the end user.&#13;
This thesis provides contributions in the form of both algorithms and systems. First, it introduces three models that enlarge the design space of unsupervised time series anomaly detection: TadGAN, which leverages adversarial reconstruction; AER, which unifies predictive&#13;
and reconstructive objectives in a single hybrid score; and MixedLSTM, which explicitly incorporates interdependencies to improve anomaly detection in multivariate time series. We propose two range-based evaluation metrics that quantify detection quality over temporal intervals. Second, it presents our system Orion, which abstracts anomaly detection pipelines as directed acyclic graphs of reusable primitives, providing user-friendly APIs and enabling interactive visual inspection. Building on this infrastructure, OrionBench performs periodic, fully reproducible benchmarks, producing leaderboards that align research innovations with the needs of end users. Third, the thesis explores a new paradigm – foundation models for unsupervised time series anomaly detection – by formulating SigLLM, which employs large language models and time series foundation models for zero-shot anomaly detection via prompting and forecasting. This paradigm indicates a promising path to developing scalable models for anomaly detection. Finally, beyond evaluating our systems on publicly available datasets, we provide extensive experiments on two industrial case studies that demonstrate improved detection accuracy and practical usability of our system.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164554</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for Reliability and Robustness in Integrated Electronic and Photonic Systems</title>
<link>https://hdl.handle.net/1721.1/164553</link>
<description>Techniques for Reliability and Robustness in Integrated Electronic and Photonic Systems
Chakraborty, Uttara
Reliability and robustness are key concerns in the development of novel electronic and photonic materials, devices, and systems. This thesis presents statistical and machine learning techniques for reliability analysis of heterogeneously-integrated systems, extraction of variations from photonic test structure measurements, making smart decisions about test configurations in the face of time and resource constraints, and robust design of photonic components. To estimate reliability model parameters from lifetime datasets where multiple underlying failure mechanisms are present, a differential evolution framework and a bound-constrained expectation maximization algorithm are developed; both these approaches significantly outperform the gradient-based L-BFGS-B algorithm. New schemes for strategic failure analysis on a subset of the failed units are presented, both for detecting the presence of a second failure mechanism and for improving two-mechanism reliability models. A regression-based protocol is also presented for optimally selecting reliability test conditions to verify physical failure mechanism models. A maximum-likelihood-estimation-based approach is demonstrated for the simultaneous extraction of waveguide index and thickness variations using integrated photonic directional couplers and Mach-Zehnder interferometers. Schemes are proposed for optimal selection of cut-back test structures and for propagation loss estimation with a Bayesian prior distribution for fiber-coupling error. Finally, a robust Bayesian optimization algorithm using a new tunable acquisition function is presented for photonic component design. The methods developed in this thesis are expected to be broadly applicable to a wide variety of electronic and photonic devices and systems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164553</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and Applications of Large-Area Monolayer Graphene</title>
<link>https://hdl.handle.net/1721.1/164515</link>
<description>Synthesis and Applications of Large-Area Monolayer Graphene
Wang, Zhien (Abigail)
Graphene, renowned for its exceptional electrical, mechanical, and chemical properties, is a promising candidate for next-generation electronics, photonics, and biosensing. However, realizing its full potential depends critically on the ability to synthesize high-quality monolayer graphene. In this thesis, we present a robust chemical vapor deposition (CVD) approach for synthesizing large-area, adlayer-free, single-orientation graphene on Cu(111) foil and Cu(111) film/sapphire. A comparative analysis between these two substrates reveals critical differences in wrinkle density, grain size, and strain — offering insights for optimizing graphene growth.&#13;
We further identify and characterize defective merging behavior in single-orientation graphene domains. Contrary to conventional assumptions, these merging regions contain permeable defects, revealing previously unrecognized limitations in using single-orientation stitched graphene as an impermeable barrier. To scale up production while reducing human error, we also develop an autonomous CVD platform with automated sample handling, growth and post-growth oxidation. This system enables high-throughput and reproducible graphene synthesis with minimal supervision.&#13;
Building on these synthesis advances, we explore multiple applications of large-area monolayer graphene. We discover that graphene can promote interfacial oxidation of metals like aluminum and titanium during deposition, whereas metals such as nickel remain stable — a finding that informs the engineering of metal-graphene interfaces for electronic devices. In parallel, we explored diverse applications of graphene, including its role as a transparent, flexible electrode in organic solar cells, along with several collaborative efforts demonstrating its use as a sensor for cardiac microtissues, and as a tunable microheater in mid-infrared devices.&#13;
Altogether, this work advances both the fundamental understanding and technological scalability of monolayer graphene, positioning it as a versatile platform for future applications across electronics, optoelectronics, and biointerfaces.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164515</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping signaling networks and rapidly evolving genes in the developing Arabidopsis seed at single-nucleus resolution</title>
<link>https://hdl.handle.net/1721.1/164514</link>
<description>Mapping signaling networks and rapidly evolving genes in the developing Arabidopsis seed at single-nucleus resolution
Martin, Caroline A.
Seeds are an exceptional evolutionary innovation that enables the conditional allocation of maternal resources to successfully fertilized ovules. During early development, seeds accumulate nutrients that are utilized either by the embryo or by humans who harvest seed crops for food, biofuel, and livestock feed. Moreover, the grains of maize, rice, and wheat provide approximately 60% of the calories consumed worldwide. Although seeds are a cornerstone for ecosystems and modern agriculture, fundamental aspects of their development are incompletely understood. In this thesis, I develop a transcriptional atlas of seed development using the model plant Arabidopsis thaliana to clarify the functional compartmentalization, diversity, and developmental dynamics of cell types in the seed. I focus my analyses on how seed cell types communicate with one another to ensure successful propagation, and how genetic conflicts in the seed may drive rapid evolution in specific cell types. After characterizing the extent of short, secreted peptide expression in specific seed cell types, I perform in silico screens to match potential peptide hormones with their receptors. In total, I show that the seed coat shows functional compartmentalization around the gateway for maternal resources into seeds, that seed genes differentially expressed in a maternal resource transfer structure are rapidly evolving, and that genes underlying brassinosteroid biosynthesis and response are expressed in adjacent tissues, among other findings. This thesis illuminates potentially new mechanisms for inter-tissue coordination and provides a transcriptional reference for future seed studies.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164514</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization in Deep Learning: Structured, Realistic and Interpretable Learning for Decision-Making</title>
<link>https://hdl.handle.net/1721.1/164513</link>
<description>Optimization in Deep Learning: Structured, Realistic and Interpretable Learning for Decision-Making
Tsiourvas, Asterios
In recent years, deep learning has emerged as a powerful tool for data-driven decision-making. However, its adoption in high-stakes applications is often constrained by challenges related to interpretability, fairness, and generalization in structured or complex environments. This thesis develops new optimization methodologies to enhance the realism, structure-awareness, and interpretability of deep learning models in decision-making tasks. We begin, in Chapter 2, by addressing the challenge of optimizing trained neural networks for data-driven decision-making. Although neural networks can encode rich representations of preferences or outcomes, directly optimizing their outputs can be computationally intractable and may produce unrealistic prescriptions. We introduce scalable algorithms that leverage the piecewise-linear structure of ReLU networks, reducing the original hard-to-solve mixed-integer program to tractable linear programs. To ensure realism, we introduce constraints that restrict decisions to lie on the data manifold. We then extend this framework to any differentiable neural network or MIP-expressible model and show that it scales for networks with millions of parameters. In Chapter 3, we focus on decision-making under observational data. First, we study personalized treatment recommendations under discrete treatments. We introduce the Prescriptive ReLU (P-ReLU) network, a piecewise-linear model that partitions the input space into polyhedral regions, assigning treatments uniformly within each, and that can be translated into an equivalent interpretable decision tree. We demonstrate that P-ReLU achieves strong prescriptive accuracy and accommodates structural/prescriptive constraints with ease. Next, we consider the problem of large language model (LLM) routing, where a query must be dynamically routed to the best model under competing metrics like accuracy and cost.
We develop a causal, end-to-end approach that learns routing policies directly from logged observational data, minimizing decision-making regret directly. Finally, we tackle the problem of generating realistic, manifold-aligned counterfactual explanations. To address this problem, we present a MIP formulation where we explicitly enforce manifold alignment by reformulating the highly nonlinear Local Outlier Factor (LOF) metric as a set of mixed-integer constraints. To address the computational challenge, we leverage the geometry of the network and propose an efficient decomposition scheme that reduces the initial hard-to-solve problem to a series of significantly smaller, easier-to-solve problems. We further extend this framework to any differentiable neural network or MIP-expressible machine learning model. In Chapter 4, we focus on structured machine learning. We first address the problem of hierarchical time series forecasting, where predictions must be both accurate and consistent with the aggregation structure of the hierarchy. While prior methods rely on fixed projection matrices, we propose learning the optimal oblique projection directly from data. The proposed end-to-end approach jointly trains the forecasting model and projection layer, significantly improving accuracy and coherence. Next, we study the problem of creating a highly expressive, interpretable, and fair machine learning model. We propose Neural-Informed Decision Trees (NIDTs), a model that combines the predictive power of neural networks with the inherent interpretability of decision trees. NIDTs use axis-aligned splits on dataset features to form transparent decision paths, and at each leaf, apply a linear predictor based on both the original features and neural embeddings from a task-specific network to capture non-linearities.
To generate NIDTs, we develop a decomposition training scheme that supports direct integration of fairness constraints via a constrained convex optimization problem solved at each leaf. Finally, in Chapter 5, we address fairness and efficiency in emergency department (ED) operations, where prolonged length of stay (LOS) has been linked to adverse outcomes such as increased mortality and higher risk of hospital-acquired infections. We focus on the patient prioritization and placement aspects of ED operations to improve throughput and reduce wait times. We propose a novel MIP predictive-prescriptive framework that decomposes predicted LOS into actionable components, enabling a more granular and operationally meaningful model of ED dynamics. Fairness considerations are explicitly incorporated into the formulation. To address uncertainty, we introduce a sampling-based solution approach. Our method increases ED throughput by 50–100% and reduces average wait time by 50–75%, depending on current utilization levels, while achieving near-optimal performance compared to a clairvoyant oracle. This work was conducted in collaboration with a major U.S. academic medical center. To facilitate practical implementation, we also design an interpretable metamodel that approximates the predictive-prescriptive algorithm with high fidelity. Together, these contributions provide a unified perspective on deep learning for reliable decision-making, grounded in optimization and encompassing interpretability, structure-awareness, and causal reasoning, well-suited for high-stakes operational environments.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164513</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reverberation Mapping of Supermassive Black Holes using Machine Learning</title>
<link>https://hdl.handle.net/1721.1/164512</link>
<description>Reverberation Mapping of Supermassive Black Holes using Machine Learning
Lewin, Collin
Accreting supermassive black holes at the centers of galaxies, known as active galactic nuclei (AGN), offer a unique window into the physics of accretion and feedback that shape galactic evolution. Yet, the small spatial scales of these regions remain inaccessible to direct imaging. Reverberation mapping circumvents this limitation by using time delays between correlated emission at different wavelengths to infer physical size scales. While X-ray reverberation probes the innermost accretion flow, continuum reverberation in the UV, optical, and infrared (UVOIR) traces reprocessing by the accretion disk and broad-line region (BLR). In this thesis, I develop and apply frequency-domain timing techniques based on Gaussian Process (GP) regression to study AGN reverberation across X-ray and UVOIR regimes. By modeling the empirical variability of AGN light curves with GPs, I interpolate onto an evenly sampled time grid, enabling robust estimation of Fourier-resolved time lags despite irregular sampling or large time gaps. I apply this method to NuSTAR observations of the Narrow-line Seyfert 1 galaxy Ark 564, introducing a multi-task GP model that jointly learns kernel hyperparameters across light curves. This enables the first simultaneous modeling of lag and flux spectra from both NuSTAR and XMM-Newton using a relativistic reverberation model to constrain black hole mass and disk properties. Recent reverberation campaigns with the Neil Gehrels Swift Observatory and ground-based telescopes have revealed significant discrepancies between observed inter-band lags and standard accretion disk theory. These include unexpectedly large lag amplitudes (the “accretion disk size problem”) and weak correlations between X-ray and UV/optical light curves. To investigate further, I analyze recent Swift campaigns of Mrk 335 and Mrk 817 using GP-based frequency-resolved lag analysis. 
In both sources, standard disk lags appear only on short timescales (high frequencies), while longer-than-expected lags dominate at low frequencies. These lag excesses are consistent with reprocessing at larger radii, similar to the BLR. Mrk 817 offers a rare opportunity to connect the inner and outer accretion flow: I obtain the first simultaneous measurement of X-ray and UVOIR lags, effectively mapping the full disk. These lags vary significantly over the campaign, with longer delays during periods of stronger X-ray obscuration. This suggests that a disk wind may modulate the observed lags by introducing additional reprocessing and/or blocking ionizing flux from reaching more-distant material. To test this obscuration effect across a population, I conduct the first statistical study of UV/optical lag excess versus physical parameters across the Swift campaigns. The results show that the lag excess is driven entirely by obscured AGN, while the lags of unobscured sources are, on average, consistent with thin-disk theory. Regression analysis reveals that X-ray column density explains over 80% of the variance in lag excess. As for the X-ray/UV connection, obscured AGN also tend to show weaker correlations and more variable lags, suggesting that line-of-sight absorption not only contributes additional reprocessed emission that extends the UV/optical lags, but may also decouple or delay the X-ray and UV variability. To make GP-based time series analysis accessible to the community, I developed the STELA Toolkit, a fully documented Python package for computing frequency-domain data products using GPs. I also benchmark GP performance against other interpolation methods, including state-of-the-art transformers, paving the way for scalable, ML-enabled timing analysis in the era of time-domain surveys like the Vera C. Rubin Observatory.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164512</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Nonlinear Dynamics: Methods and Applications</title>
<link>https://hdl.handle.net/1721.1/164511</link>
<description>Learning Nonlinear Dynamics: Methods and Applications
Rossi, Baptiste T.
Accurate modeling of dynamical systems through differential equations is essential for scientific prediction and prescriptive control. Traditional model development, which relies on expert knowledge, parameter fitting and validation, is often iterative, time-consuming, and complicated by real-world data complexities such as noise and missing observations. This thesis addresses these challenges by developing robust, scalable, and interpretable methods for learning nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs) directly from data, with a particular emphasis on applications in fluid dynamics.&#13;
&#13;
In Chapter 2, we introduce a novel methodology for learning arbitrary nonlinear ODEs using collocation methods combined with interpolation. This approach demonstrates enhanced robustness to noise and significant computational speed-ups compared to classical system identification techniques, including the popular SINDy framework. It also provides a constructive method for reconstructing unobserved system components, making it applicable to partially observed systems, and offers theoretical guarantees on accuracy traditionally absent in strong-form identification.&#13;
&#13;
In Chapter 3, we combine the approach from Chapter 2 with sparse regression to derive sparse ODEs from data, demonstrating enhanced robustness to observational noise. Our method shows improved performance in recovering the true structures and coefficients on canonical benchmark tests under significant noise, while the performance of traditional surrogate methods deteriorates even with minimal noise.&#13;
&#13;
In Chapter 4, we extend this methodology to PDEs using the method of lines, addressing issues related to data scale and interpolation ill-posedness. With a focus on Computational Fluid Dynamics (CFD), we show that our method goes beyond recovering complex nonlinear PDEs, such as the Navier-Stokes equations, from simulation data. The method can also be used as an a posteriori indicator of simulation quality, providing insights into the effective PDEs represented by a given simulation, and pinpointing error-generating areas to inform adaptive mesh techniques.&#13;
&#13;
Lastly, in Chapter 5, we introduce a novel data-driven framework for modeling turbulent phenomena, a long-standing challenge in aerospace and climate science. Our approach addresses the Reynolds-Averaged Navier-Stokes (RANS) closure problem, which involves modeling the unobserved eddy viscosity field. We tackle two interconnected inverse problems: reconstructing the eddy viscosity from flow data and discovering its governing PDEs, thereby proposing a pathway to uncover new or refined RANS closure models directly from high-fidelity simulations. This chapter establishes a tractable baseline using a composite loss function, which we evaluate on canonical turbulent flows. Our results demonstrate that while the approach can recover governing equations when the ground truth eddy viscosity is known, significant challenges remain due to noise and numerical errors. We conclude that a more advanced reconstruction methodology is essential for robustly discovering these models, underscoring the potential of this data-driven approach and identifying critical areas for future research.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164511</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Electronic Compressibility of Rhombohedral Graphene Multilayers</title>
<link>https://hdl.handle.net/1721.1/164510</link>
<description>The Electronic Compressibility of Rhombohedral Graphene Multilayers
Aronson, Samuel H.
In condensed matter systems, energy bands with narrow dispersion frequently host correlated electronic phases that arise from strong Coulomb interactions. When these bands also have concentrated Berry curvature, the correlated phases may be topologically non-trivial. The low-energy bands of rhombohedral graphene multilayers possess both of these ingredients, making this a promising class of materials in which to search for correlated topological electronic ground states. This thesis describes our electronic compressibility measurements on rhombohedral graphene multilayers, with a particular focus on the pentalayer system (R5G). We utilize a planar capacitance technique that probes the thermodynamic density of states and enables us to extract energy gaps of incompressible phases. We observe a variety of correlated electronic phenomena including half and quarter metals, layer antiferromagnetism, correlation-driven Chern insulators, and thermodynamic signatures of potential Wigner crystallization. We also study the electronic compressibility of R5G aligned to a hexagonal boron nitride (hBN) substrate to form a moiré superlattice. Motivated by the recent discovery of the fractional quantum anomalous Hall effect in this system when the electrons are pushed away from the moiré interface by an external electric displacement field, we study the opposite moiré-proximal limit, in which the superlattice potential is considerably stronger. We observe integer and fractional Chern insulator states that persist down to low magnetic fields in addition to numerous trivial and topological charge density waves. We map out a phase diagram that is highly sensitive to both displacement and magnetic fields, establishing the R5G-hBN superlattice as a highly tunable system for studying the interplay between intrinsic band topology and strong lattice effects.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164510</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hadronic Structure with Classical and Quantum Computing</title>
<link>https://hdl.handle.net/1721.1/164509</link>
<description>Hadronic Structure with Classical and Quantum Computing
Avkhadiev, Artur
Calculations in lattice quantum chromodynamics (QCD) — presently the only known systematically improvable approach to describe the strong nuclear force in the nonperturbative regime from first principles — are playing an increasingly important role in revealing how hadrons emerge from the interactions of the underlying degrees of freedom: quarks and gluons. With computational and theoretical advances, more fruitful connections have emerged between lattice QCD and phenomenology, and the field is now well into a stage ripe for deriving tighter constraints on hadronic structure through joint analyses of numerical lattice QCD results with experimental data.&#13;
 This thesis summarizes lattice QCD calculations of the Collins-Soper (CS) kernel: a nonperturbative function whose inclusion in joint analyses has the potential to advance the study of multidimensional hadronic structure. The CS kernel is an anomalous dimension of transverse-momentum-dependent (TMD) distributions describing a three-dimensional structure of ultrarelativistic hadrons as a function of quark-gluon momenta collinear with and transverse to the hadron's motion. Constraints on the CS kernel at nonperturbative transverse-momentum scales are instrumental to relate TMDs across scales and processes. The kernel differs for quark and gluon TMDs, but is otherwise universal. This thesis presents the first lattice QCD determination of the quark CS kernel with systematic control over operator mixing, quark mass, and lattice discretization, and a proof-of-principle lattice calculation of the gluon CS kernel providing the first nonperturbative constraints on this quantity.&#13;
 Additionally, this thesis summarizes exploratory studies on how Hamiltonian calculations — realized with quantum-computer simulations and tensor networks — may be combined with conventional Monte Carlo calculations based on Lagrangian formulations in Euclidean space. These studies examine how constructions of interpolating operators, used in conventional calculations to map between the vacuum and a ground state of interest, may be optimized in Hamiltonian calculations to increase overlap with the target state. Results, limited to the Schwinger model, support further investigations of this approach in theories more closely resembling QCD as quantum-computing and tensor-network technologies continue to mature.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164509</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating RTL Simulation Through Fine-grained Task Dataflow and Selective Execution</title>
<link>https://hdl.handle.net/1721.1/164508</link>
<description>Accelerating RTL Simulation Through Fine-grained Task Dataflow and Selective Execution
Elsabbagh, Fares
Fast simulation of digital circuits is crucial to build modern chips. Current processors and SoCs integrate hundreds of complex components, including cores, accelerators, and memory hierarchies. Simulating these systems is necessary to verify correctness and explore the design space. Simulation can happen at different levels of abstraction. In this work we focus on Register-Transfer-Level (RTL) simulation. While RTL simulators are frequently used in development due to their quick compilation times, their runtime performance is slow: as designs are scaled up, multicore communication and scheduling overheads limit performance and scalability.&#13;
&#13;
We present ASH, a parallel architecture tailored to RTL simulation workloads. ASH consists of a tightly codesigned hardware architecture and compiler for RTL simulation. ASH exploits two key opportunities. First, it performs dataflow execution of small tasks to leverage the fine-grained parallelism in simulation workloads. Second, it performs selective event-driven execution to run only the fraction of the design exercised each cycle, skipping ineffectual tasks. ASH hardware provides a novel combination of dataflow and speculative execution, and ASH’s compiler features several novel techniques to automatically leverage this hardware.&#13;
&#13;
We evaluate ASH in simulation using large Verilog designs that represent different types of architectures. With 256 simple cores, ASH is gmean 1,485× faster than 1-core Verilator, and it is 32× faster than Verilator on a server CPU with 32 complex cores while using 3× less area.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164508</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Task Scheduling Techniques to Accelerate RTL Simulation</title>
<link>https://hdl.handle.net/1721.1/164507</link>
<description>Task Scheduling Techniques to Accelerate RTL Simulation
Sheikhha, Shabnam
Fast simulation of digital circuits is crucial to build modern chips. Slow simulation lengthens chip design time and makes bugs more frequent. While simulation can happen at different levels of abstraction, Register-Transfer-Level (RTL) simulation is the usual bottleneck in chip design, as it is needed for ongoing debugging and evaluation. Current simulators scale poorly across CPU cores, because they are unable to exploit the fine-grained parallelism inherent in simulation workloads.&#13;
&#13;
We present ASH, a parallel architecture tailored to simulation workloads. ASH consists of a tightly codesigned hardware architecture and compiler for RTL simulation. ASH exploits two key opportunities. First, it performs dataflow execution of small tasks to leverage the fine-grained parallelism in simulation workloads. Dataflow execution exposes abundant parallelism, as each task can run as soon as its inputs are available. Second, it performs selective event-driven execution to run only the fraction of the design exercised each cycle, skipping ineffectual tasks. Selective execution introduces dynamic data dependencies since skipped tasks do not communicate data. ASH employs speculative execution to handle these dependencies. ASH’s hardware provides a novel combination of dataflow and speculative execution, and ASH’s compiler features several novel techniques to automatically leverage this hardware. The key compiler techniques include a novel partitioning scheme for minimizing data communication while maintaining load balance, and a strategic coarsening mechanism to reduce the overheads of fine-grained tasks.&#13;
&#13;
We evaluate ASH in simulation using large Verilog designs. With 256 simple cores, ASH is gmean 1,485× faster than 1-core Verilator, and it is 32× faster than Verilator on a server CPU with 32 complex cores while using 3× less area.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164507</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scheduling Strategies for Bus Operator Retention: A Mixed-Methods Evaluation of Bus Operator Preferences and 4-Day Workweek Feasibility</title>
<link>https://hdl.handle.net/1721.1/164506</link>
<description>Scheduling Strategies for Bus Operator Retention: A Mixed-Methods Evaluation of Bus Operator Preferences and 4-Day Workweek Feasibility
Baum, Amelia Rose
Public transit agencies face significant and growing challenges related to workforce shortages, absenteeism, and employee retention, which threaten service reliability. Reports found that 90% of U.S. transit agencies are experiencing a workforce shortage, with 84% claiming that the shortage affects their ability to provide scheduled service. Industry-wide, operator absence is a significant contributor to missed work and has, in many cases, delayed the full reinstatement of service following the COVID-19 pandemic. The quality of bus operators' work is significantly impacted by inflexible crew scheduling constraints. However, most studies focus on pay, benefits, and infrastructure, neglecting the importance of scheduling. This thesis aims to fill this gap by examining the potential for crew scheduling improvements to enhance the quality of life for bus operators through a three-part case study at the Chicago Transit Authority. Part 1 analyzes the historical work preferences of CTA bus operators, providing actionable insights for scheduling improvements. Part 2 presents a high-fidelity proof of concept in HASTUS, using block schedules (10-hour-a-day runs that are intended to be run by an operator 4 days a week) and rostering to reduce negative work traits and increase consecutive and weekend days off for most operators, while maintaining the schedules of the top 20% of senior operators. Part 3 evaluates the new 10-hour, 4-day-per-week packaged schedules via an LLM-based paired alternatives survey of operators at one CTA garage, measuring the desirability of the proof of concept and collecting qualitative feedback. Overall, the new schedules substantially improve the quality of work for operators by guaranteeing at least one weekend day off and at least two consecutive days off and by increasing day-to-day schedule consistency and overnight rest time, while maintaining constant vehicle requirements and total pay hours. 
The survey results show that 72% of operators at the 74th Street garage favor the new schedules, demonstrating strong support for their potential adoption and encouraging future exploration of a block-schedule hybrid rostering paradigm at the CTA and other transit agencies.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164506</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Charcuterie Platter of QCD Matter</title>
<link>https://hdl.handle.net/1721.1/164505</link>
<description>A Charcuterie Platter of QCD Matter
Sun, Zhiquan
One of the greatest current challenges in theoretical high energy physics is to understand the dynamics of Quantum Chromodynamics (QCD). In this thesis, I address a variety of questions in QCD using Effective Field Theory (EFT). The first question deals directly with the observed phenomenology of QCD: How can we use EFT to disentangle the complicated three-dimensional dynamics of how quarks and gluons, the fundamental degrees of freedom of QCD, combine to form the observed bound states in nature called hadrons? I initiate a new formalism using Heavy Quark Effective Theory to study this dynamical process known as hadronization. I shed new light on the transverse-momentum-dependent fragmentation process of heavy (charm and bottom) quarks by making use of the fact that heavy quarks with masses much larger than the strong interaction scale decouple from the rest of the hadronization cascade. I also present exciting heavy quark phenomenology at existing colliders and the upcoming Electron-Ion Collider. The second question investigates the field theory structure of QCD: What can we learn about the nonperturbative structure of the quantum field theory through the abstruse emergent phenomenon in QCD called “confinement”, which traps quarks and gluons inside hadrons? I study a class of cleverly constructed observables known as energy correlators by using field-theory-based methods to determine the leading nonperturbative contribution, and examine the universality of the nonperturbative matrix element that gives rise to this contribution. I also show that including the nonperturbative contribution has a significant impact on the extraction of the strong coupling constant, a fundamental parameter of the Standard Model, using tools such as factorization and resummation from EFT. 
Last but not least, the final question explores the underlying symmetry properties of QCD and its potential completions: How robust is the axion solution to the strong CP (Charge-Parity) problem, and what are some of its implications beyond the realm of QCD? I examine the axion quality problem in post-inflationary QCD axion models with different symmetry properties and identify a new tension with standard cosmology. I further show that the axion string-domain wall dynamics are more complicated than commonly expected, undermining the reliability of a unique mass prediction for axion dark matter in post-inflationary models. I showcase the importance of considering both high-energy extensions and the EFT at low energy, and uncover new complexity of the axion solution to the strong CP problem.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164505</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from pre-pandemic data to design and test future-proof therapeutics</title>
<link>https://hdl.handle.net/1721.1/164504</link>
<description>Learning from pre-pandemic data to design and test future-proof therapeutics
Gurev, Sarah
Effective pandemic preparedness relies on predicting immune-evasive viral mutations to enable early detection of variants of concern and design vaccines and therapeutics that are resilient to future viral evolution. However, current strategies for viral evolution prediction are not available early in a pandemic and have limited predictive power – experimental approaches require host polyclonal antibodies and existing computational methods draw heavily from current strain prevalence. In addition, vaccines and therapeutics have been designed with an eye towards past or circulating variants, not towards future evolution. To address these challenges, we developed EVEscape, a generalizable framework that integrates fitness predictions from a deep generative model of evolutionary sequences with biophysical and structural information. EVEscape quantifies the immune escape potential of viral strains at scale and is applicable before surveillance sequencing, experimental scans, or 3D structures of antibody complexes are available. We demonstrate that EVEscape, trained on sequences available prior to 2020, performs as accurately as high-throughput experimental scans at anticipating pandemic variation for SARS-CoV-2 and is generalizable to other viruses including Influenza A virus, HIV, and understudied viruses with pandemic potential such as Lassa and Nipah. We investigate both alignment-based and protein language models to explore the best model of mutation effects across pandemic-threat viral families. 
We demonstrate the utility of EVEscape in three critical applications: (1) Surveillance efforts flagging high-escape SARS-CoV-2 variants from their first appearance; (2) Design of panels of viral antigens that mimic future viral variants for early, proactive evaluation of the future protection of vaccines and therapeutics; and (3) Design of a pan-sarbecovirus nanoparticle-based vaccine capable of eliciting broad, long-lasting protection against sarbecoviruses, including future variants. This three-pronged approach represents a paradigm shift in pandemic preparedness, offering a novel strategy to preemptively address viral families with pandemic potential and significantly bolster global prevention efforts.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164504</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topics in quantum information theory and quantum&#13;
many-body physics</title>
<link>https://hdl.handle.net/1721.1/164503</link>
<description>Topics in quantum information theory and quantum&#13;
many-body physics
Balasubramanian, Shankar
In this thesis we present two results relating to the intersection between quantum information theory and quantum many-body physics. The first pertains to quantum algorithms, where few computational problems are believed to exhibit exponential separation between quantum and classical performance. For those that do, natural generalizations remain elusive. One speedup that has especially resisted generalization is the use of quantum walks to traverse the welded tree graph, due to Childs, Cleve, Deotto, Farhi, Gutmann, and Spielman. We show how to generalize this to a large class of hierarchical graphs in which the vertices are grouped into “supervertices” that are arranged according to a d-dimensional lattice. Supervertices can have different sizes, and edges between supervertices correspond to random connections between their constituent vertices. The traversal time of quantum walks on these graphs is related to (a) the existence of small subspaces within which the quantum walk evolves and (b) the localization properties of the quantum walk within these subspaces. We find examples of hierarchical graphs that yield provable speedups over classical algorithms ranging from superpolynomial to exponential, depending on the underlying dimension and the random graph model. We also discuss how to relax criterion (a) to the existence of a small and approximate subspace by using techniques from graph sparsification. The second result pertains to fault-tolerant quantum memories. Storing a qubit in a noisy environment is crucial for developing full-scale quantum computers. While constructions of fault-tolerant quantum memories exist, they often assume that quantum operations need not be local and that the assisting classical computation operates instantaneously and noiselessly. In particular, constructing a topological quantum memory below four dimensions with local quantum and classical operations that is fault-tolerant under both quantum and classical noise is an open problem. 
We construct a local quantum memory for the 2D toric code using ideas from the classical cellular automata of Tsirelson and Gács. Our memory preserves a logical state for exponential time in the presence of both classical and quantum noise below a constant noise rate. While our 2D quantum memory is built from operations that depend on space and time, we construct a fault-tolerant quantum memory in 3D using stacks of 2D toric codes that can be built with time-independent operations.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164503</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Matter in the Era of Generalized Symmetries</title>
<link>https://hdl.handle.net/1721.1/164502</link>
<description>Quantum Matter in the Era of Generalized Symmetries
Chatterjee, Arkya
The discovery of generalized symmetries has led to powerful new insights into quantum matter. They have been used to classify new families of quantum phases, place constraints on phases realizable in a given physical system, and conceptually unify seemingly disparate phenomena. In many ways, they prove just as powerful as traditional symmetries at organizing and constraining the theories that describe quantum matter. In this thesis, we attempt a unification of such constraints by developing a holographic correspondence between (generalized) symmetries and topological orders, called the Sym/TO correspondence. For any (finite internal) symmetry of a quantum system in d (spatial) dimensions, we associate with it a unique topological order in d + 1 dimensions, called its Symmetry Topological Order (SymTO). We devise an operator algebraic recipe to compute the SymTO data for any lattice spin model, demonstrating it in a number of examples. We then use the SymTO to classify possible quantum phases allowed by the symmetry—we call this a generalized Landau paradigm. Besides classifying phases, we also identify constraints on the phase transitions between them using a SymTO-resolved modular bootstrap. We test this framework in a quantum spin chain with non-invertible symmetries. We discover a new Kramers-Wannier-like duality and a rich phase diagram including a non-invertible symmetry-enriched incommensurate phase. The translation symmetry of the spin chain has a nontrivial interplay with the lattice Kramers-Wannier duality, which matches the anomaly of the corresponding non-invertible symmetry in the low-energy effective field theory. Finally, we explore such unusual anomaly-matching mechanisms in more detail in the context of the chiral anomaly of a single massless Dirac fermion, demonstrating a novel lattice realization of chiral symmetries and their anomaly.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164502</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems Materials Design of Ordered Nanocomposite Assemblies</title>
<link>https://hdl.handle.net/1721.1/164501</link>
<description>Systems Materials Design of Ordered Nanocomposite Assemblies
Thrasher, Carl James
The ability to precisely organize matter across multiple length scales is a central challenge in modern materials science. In this dissertation, I develop a systems materials design approach to engineer hierarchically structured nanocomposite assemblies, integrating molecular recognition, supramolecular chemistry, colloidal assembly, and bulk processing into unified material platforms. At the molecular and nanoscale, I investigate how multivalent supramolecular interactions can be rationally programmed by controlling the architecture of polymer binders grafted to nanoparticle surfaces. Through systematic variations in polymer topology, recognition group density, and scaffold geometry, I demonstrate how polymer design dictates the thermodynamic strength and multivalency of nanoparticle superlattice assembly, enabling precise control of thermal stability, crystallographic symmetry, and collective bonding behaviors in massively multivalent systems. Building on these design principles, I develop a colloidal metallurgy framework to process self-assembled nanoparticle superlattices into dense macroscopic polycrystalline solids while preserving nanoscale order. By systematically studying the interplay of temperature, pressure, and time during colloidal sintering, I elucidate mechanisms of densification, defect evolution, and grain growth unique to colloidal systems, establishing processing–structure relationships that parallel but fundamentally diverge from atomic sintering. Finally, I extend these concepts to create stretchable nanocomposite supercrystals, embedding supramolecularly assembled superlattices into elastomeric matrices via co-engineered polymer chemistries that enable hierarchical strain transduction. These materials combine the nanoscale precision of supercrystals with mechanical robustness, reconfigurability, and stimuli-responsive optical properties, illustrating a scalable pathway to multifunctional metamaterials. Collectively, this work demonstrates how a systems-level integration of molecular design, colloidal assembly, and bulk processing enables new paradigms for the synthesis of hierarchically ordered, functional nanocomposites.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164501</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The novel roles of BCL6 and BATF3 in regulating human CD8⁺ T cell dysfunction</title>
<link>https://hdl.handle.net/1721.1/164500</link>
<description>The novel roles of BCL6 and BATF3 in regulating human CD8⁺ T cell dysfunction
Traunbauer, Anna Katharina
Reduced effector function and elevated inhibitory receptor expression are hallmarks of exhausted CD8⁺ T cells, yet the underlying molecular and epigenetic drivers remain incompletely defined. Here, we developed an in vitro repeated stimulation model to recapitulate features of human CD8⁺ T cell dysfunction and delineate transcriptional and epigenetic landscapes. Our analyses revealed that BCL6 and BATF3 are robustly upregulated in dysfunctional CD8⁺ T cells, with ATAC-seq demonstrating enhanced chromatin accessibility at their gene loci. Transcription factor footprinting shows increased BATF3 motif occupancy in chronically stimulated cells, and integrative multi-omic analysis combining footprints, open chromatin regions, RNA-seq and ChIP-seq data reveals that putative BATF3 target genes may include master regulators of exhaustion. Moreover, overexpression of BCL6 or BATF3 markedly upregulates TIM-3 expression and suppresses cytokine release, establishing their capacity to induce T cell dysfunction. We further validated these findings ex vivo in antigen-specific CD8⁺ T cells from patients with advanced melanoma, as well as HCV and HIV infections, where cells were enriched for BCL6^high and BATF3^high subsets co-expressing canonical exhaustion markers such as PD-1, TIM-3 and CD39. Notably, single-cell RNA sequencing of HIV-specific CD8⁺ T cells identified a distinct BCL6^high PD1⁻ progenitor population that gives rise to two distinct subsets via divergent differentiation trajectories: one branch generates effector-like BCL6^high PD1⁺ cells, whereas the other produces BCL6^high PD1⁺ cells that retain an exhaustion gene signature alongside partial memory-like features. Collectively, these findings identify BCL6 and BATF3 as key mediators of human CD8⁺ T cell dysfunction and illuminate novel transcriptional and epigenetic pathways that may be leveraged for therapeutic intervention in cancer and chronic viral infections.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164500</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aspects of Nonperturbative Heavy Quark Physics</title>
<link>https://hdl.handle.net/1721.1/164499</link>
<description>Aspects of Nonperturbative Heavy Quark Physics
Lin, Joshua
The properties of charm and bottom quarks are an interesting corner of Quantum Chromodynamics (QCD) because their masses are much heavier than the typical QCD interaction energy ΛQCD. Due to this scale separation, it is possible to describe these heavy quarks by Effective Field Theories (EFTs) that simplify their equations of motion, make explicit additional symmetries that only appear for heavier quark masses, and simplify the theoretical calculations required for predictions. By discretising these EFTs in a lattice regularisation, nonperturbative calculations of observables of interest become possible. This thesis presents progress towards systematically controlled calculations of two such observables: the Spectator Effect contributions to the inclusive decay rates of b-hadrons, and the real-time dynamics of fermions propagating in a thermal medium. Standard EFT calculations in Lattice-QCD proceed by expressing observables as sums over perturbatively computed Wilson coefficients and nonperturbative matrix elements that can be calculated by path-integral Monte Carlo methods. Though it is possible to carry out this procedure within a regulator-independent renormalization scheme, in practice almost all such decompositions are computed in the modified minimal subtraction scheme MS due to its simplicity, even though MS is only defined for the dimensional regulator (DR). Computing such observables therefore requires a matching between lattice regularised operators and operators renormalized in MS. In Chapter 2, both the dimensional regulator (DR) and the lattice regulator are reviewed, with a particular emphasis on techniques needed for calculations carried out in later sections. An interesting subtlety in DR is the need to introduce d-dimensional counterparts to the Dirac γ-matrices, which a priori are only well defined in an integer number of dimensions. This analytic continuation is of practical importance as it introduces additional Evanescent Operators (Sec. 2.1.4) that have physical consequences. In Sec. 2.1.5, traces of d-dimensional γ-matrices are related to Tutte polynomial evaluations [4], presenting a new graph-theoretic interpretation of the dimensionally regulated γ-matrices. One strategy for renormalizing lattice-regulated operators into MS involves first renormalizing into a regulator-independent scheme, before perturbatively matching between the regulator-independent scheme and MS. In Chapter 3, regulator-independent position-space (X-space) schemes for renormalizing operators defined in the leading-order Heavy Quark Effective Theory (HQET) are proposed [3]. Compared to other regulator-independent renormalization schemes such as RI-xMOM, X-space schemes have the benefit that they are gauge invariant. The next-to-leading-order matching calculations between X-space and MS are presented for heavy-light and heavy-light-light multiplicatively renormalizable operators, as well as ∆Q = 0 and ∆Q = 2 four-quark operators relevant for heavy hadron decays and mixing, where Q refers to the static quark number. Due to their heavy masses, hadrons containing heavy quarks decay via the weak nuclear force. Experimental measurements of these lifetimes provide precision determinations of the fundamental parameters of the Standard Model. The Heavy Quark Expansion expresses the inclusive lifetimes of heavy hadrons in terms of matrix elements of HQET operators of increasing dimension. The Spectator Effects are contributions due to the ∆Q = 0 four-quark operators, where the light quark degrees of freedom within a heavy hadron participate in the decay. In Chapter 4, a Lattice-QCD determination of the static decay constant f_B^HQET and the isospin-nonsinglet portion of the Spectator Effect matrix elements for heavy-light mesons is presented. Fits of bare matrix elements were performed for three different lattice spacings, and renormalized with the schemes proposed in Chapter 3 before a continuum limit is taken.
Due to the heavy masses mQ of the heavy quarks, it is possible to find temperatures T approximately satisfying the hierarchy ΛQCD ≪ T ≪ mQ. At these temperatures, QCD undergoes a deconfinement transition into the Quark-Gluon Plasma (QGP) phase, where the light degrees of freedom are no longer confined and instead screen the long-range colour forces. The heavy quarks, however, are not thermalised, and act as probes of the QGP. Further understanding of the QGP requires first-principles simulations of the heavy quark dynamics at finite temperature; however, such calculations are difficult due to the enormous size of the Hilbert space. Variational approximations of the Hilbert space encode wavefunctions within a few parameters, and provide a practical method to simulate many-particle systems. As a test case, the variational approach was applied for the first time to simulate fermions at finite temperature in a simple QFT: the 1+1d U(1) gauge theory known as the massive Schwinger model. Both the real-time dynamics of string-like states and the properties of the thermal state were studied, and such variational methods are shown to be promising approaches to the more difficult case of a heavy quark effective theory in QCD.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164499</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing the Nonperturbative Physics of QCD with Normalizing Flows and a moderate number of Pions</title>
<link>https://hdl.handle.net/1721.1/164498</link>
<description>Probing the Nonperturbative Physics of QCD with Normalizing Flows and a moderate number of Pions
Abbott, Ryan William
Quantum Chromodynamics (QCD) is a cornerstone of the Standard Model of particle physics, and the best-known theory of strong nuclear interactions. The only known systematically improvable ab-initio method for accessing the nonperturbative physics of QCD is Lattice QCD, and this thesis presents two advances in our understanding of QCD using lattice-based methods. The first is a calculation using many-pion systems to map out the entire zero-temperature, nonzero-isospin-density region of the QCD phase diagram. The calculation uses novel methods for working with many-pion systems that enable working with thousands of pions, and furthermore provides rigorous constraints on the baryon-dense region of the QCD phase diagram. The second is an application of methods from machine learning (namely normalizing flows) to accelerate sampling. This approach has the promise of eliminating issues such as critical slowing down, as well as introducing novel tools and methods that enable calculations that would not be possible otherwise.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164498</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Limits of QCD</title>
<link>https://hdl.handle.net/1721.1/164497</link>
<description>Limits of QCD
Gao, Anjie
This thesis explores the fundamental kinematic limits of Quantum Chromodynamics (QCD), including the soft, collinear, and Regge limits, using soft-collinear effective theory (SCET). We begin by studying transverse momentum dependent (TMD) physics in semi-inclusive deep inelastic scattering (SIDIS), which probes the small transverse momentum regime arising from the soft and collinear limits of QCD. We derive all-order factorization theorems for azimuthal asymmetries in SIDIS at next-to-leading power (NLP). We also propose a new angular observable, q_∗, for probing TMD dynamics at the future Electron-Ion Collider (EIC), which enables an order-of-magnitude improvement in experimental resolution while retaining sensitivity to TMD distributions. Next, we apply the TMD formalism to a class of observables known as energy correlators. We study the transverse energy-energy correlator (TEEC) in the back-to-back limit, a dijet observable at hadron colliders, and the three-point energy correlator (EEEC) in the coplanar limit, a trijet observable at lepton colliders. For both observables, we derive all-order factorization theorems and resum large logarithms to next-to-next-to-next-to-leading logarithmic (N3LL) accuracy. Finally, we analyze the Regge limit of 2 → 2 QCD amplitudes. By factorizing these amplitudes into collinear jet and soft functions and studying their rapidity evolution, we define Regge-like anomalous dimensions in a gauge-invariant manner. At the level of the exchange of two Glauber gluons in the t-channel, we recover the BFKL equation from a purely collinear perspective. Extending to three-Glauber exchange, we derive the first closed-form renormalization group equations for Regge cut contributions in several nontrivial t-channel color representations, providing a systematic method for organizing non-planar QCD amplitudes at high energy.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164497</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determining the Molecular Underpinnings of Iron Homeostasis in Human Cells</title>
<link>https://hdl.handle.net/1721.1/164496</link>
<description>Determining the Molecular Underpinnings of Iron Homeostasis in Human Cells
Lee, April
Precise regulation of nutrient availability is crucial for cellular function and survival. Iron, in particular, is tightly regulated as it serves as an essential cofactor for numerous enzymes but can catalyze the formation of toxic radicals at elevated levels. To maintain the necessary cytoplasmic iron concentration, cells store excess iron in large proteinaceous cages called ferritin and, when available iron levels fall, they degrade these cages, liberating the stored iron for use. This thesis focuses on the molecular mechanisms underlying cellular iron sensing, as well as the molecular interactions supporting regulated ferritin degradation and subsequent iron release. Specifically, this work interrogates the protein interactions involved in ferritinophagy, a form of selective autophagy that leads to the lysosomal degradation of ferritin. Extending prior work that identified key components supporting ferritinophagy, including the selective autophagy receptor protein NCOA4 and its cognate autophagosomal receptor GATE16, experiments described here uncover the molecular contacts between these proteins. I found that NCOA4 bears two short linear motifs that each bind to GATE16 with weak affinity. However, these binding motifs are highly avid and, in concert, support high-affinity binding of NCOA4 to oligomerized GATE16. I further describe that ferritin degradation in cultured human cells relies on the contacts I identified biochemically. Moreover, I found that iron decreases NCOA4’s affinity for GATE16, providing a plausible mechanism for iron-dependent regulation of ferritinophagy. Taken together, this work suggests a general mechanism by which selective autophagy receptors can distinguish between inactive monomeric GATE16 and the active oligomerized forms that primarily drive autophagy.
In related studies, I have biochemically probed the NCOA4•ferritin interface, with these experiments suggesting a novel function of NCOA4 in modulating ferritin cage structure – either through cage dismantling or through the formation of higher order structures. Taken together, these studies further define the molecular mechanisms by which NCOA4 aids cells in maintaining iron homeostasis, and they provide the requisite reagents for future work aimed at building a unified model for how mammalian cells regulate this vital but toxic metal.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164496</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sampling Methods for Fast and Versatile GNN Training</title>
<link>https://hdl.handle.net/1721.1/164495</link>
<description>Sampling Methods for Fast and Versatile GNN Training
Alkhatib, Obada
Graph neural networks (GNNs) have become a commonly used class of machine learning models that achieve state-of-the-art performance in various applications. A prevalent and effective approach for applying GNNs on large datasets involves mini-batch training with sampled neighborhoods. Numerous sampling algorithms have emerged, some tailored for specific GNN applications. In this thesis, I explore ways to improve the efficiency and expressivity of existing and emerging sampling schemes.

First, I explore system solutions to facilitate the development of fast implementations of different sampling methods. I introduce FlexSample, a system for efficiently incorporating custom sampling algorithms into GNN training. FlexSample leverages the types of performance optimizations found in SALIENT, a state-of-the-art system for fast training of GNNs with node-wise sampling. In experiments with four GNN models that use layer-wise and subgraph sampling, FlexSample achieves up to 1.3× speed-up for end-to-end training over PyTorch Geometric with the same sampling code. Furthermore, FlexSample extends SALIENT with highly optimized C++ implementations of FastGCN and LADIES layer-wise sampling, which achieve 2×–5× speed-up over their respective Python implementations.

Second, I introduce a novel framework for learning neighbor sampling distributions as part of GNN training. Key components of this framework, which I name PertinenceSample, are: (i) a differentiable approximation of node-wise sampling for GNNs; and (ii) a parametrization of node sampling distributions as node- or edge-wise weights of attention-like GNN layers. I present an initial exploration of the potential of PertinenceSample for improving node classification accuracy in the presence of noisy edges. Specifically, in two synthetic experiments where roughly half of a node’s neighbors may have similar features but different labels, I demonstrate that extending a GraphSAGE model with a 2-layer perceptron for learning the PertinenceSample weights can improve classification accuracy from 50%–75% to (nearly) 100%.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164495</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Electrocatalysts for the Production and Oxidation of Liquid Fuels</title>
<link>https://hdl.handle.net/1721.1/164494</link>
<description>Designing Electrocatalysts for the Production and Oxidation of Liquid Fuels
Zheng, Daniel J.
With the ever-rising CO₂ levels in the atmosphere, it is paramount to cease reliance on fossil fuels to meet global energy demands. While the cost of electricity from renewable sources, such as solar and wind, continues to decrease and has even fallen below that of fossil fuels since 2014, these renewable energy sources suffer from intermittency, potentially causing shortages at peak demand. Thus, methods to store or economically use excess renewable energy are needed for full decarbonization. One promising avenue is to store the excess generated electrical energy in chemical bonds, creating molecules and materials with industrial or energy storage utility. In this scheme, renewable electricity would be used to electrochemically convert earth-abundant molecules into value-added chemicals or fuels. These generated products could then be utilized as feedstocks in industrial applications, or as a fuel source to generate electricity when needed by being transformed back into their earth-abundant forms.

Central to transforming earth-abundant molecules into value-added chemicals or fuels is the oxygen evolution reaction (OER), which is found in nearly every such process. The plentiful nature of OER’s main reactant, water, and its moderate thermodynamic potential of 1.23 V vs. the reversible hydrogen electrode make OER an ideal reaction to pair with other transformations. However, the slow kinetics of OER significantly hinder the efficiency of these processes. As such, discovering new OER catalysts with high activity and stability would have widespread impact. On the other hand, one of the most promising renewable fuel sources is methanol, which boasts about three times the energy density of hydrogen and can be used as an alternative to hydrogen in proton exchange membrane fuel cells. However, the sluggish kinetics of the methanol oxidation reaction (MOR), even with current state-of-the-art noble metal catalysts, cause direct methanol fuel cells to reach an efficiency of &lt;40%, limiting their practical usage. While significant research has been invested in discovering new MOR electrocatalysts, PtRu has reigned for five decades, highlighting the need for a true breakthrough.

In this thesis, electrocatalysts for OER and MOR are examined in depth. For OER, metal-hydroxide organic frameworks (MHOFs), a promising new class of hybrid organic-inorganic materials with potential to mimic the superior functionality of enzymes, are studied. Operando vibrational and absorption spectroscopy methods are used to characterize the degradation mechanisms and lattice oxygen exchange capacity as a function of the linkers. Using this knowledge, defects are engineered into the MHOF that increase both the activity and stability compared to the pristine material. Furthermore, the traditionally reported MOR mechanism is studied using isotope-labeled reactants and operando mass spectrometry. These experiments revealed that, in contradiction to typically accepted mechanisms, the C-O bond in methanol can be cleaved during MOR, with the resulting CO₂ molecule containing two water-derived oxygen atoms, opening a new paradigm for MOR catalyst design. Driven by the need to discover new materials at scale, a fluorescence-based OER catalyst screening method is developed that can screen an entire composition space simultaneously. In addition, an AI-driven, automated platform for screening a high-dimensional multimetallic space for MOR is presented.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164494</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging the Gap: From Artificial Intelligence and Optimization Theory to Action</title>
<link>https://hdl.handle.net/1721.1/164493</link>
<description>Bridging the Gap: From Artificial Intelligence and Optimization Theory to Action
Petridis, Periklis S.
Despite significant theoretical advances in Operations Research (OR) and Artificial Intelligence (AI), a persistent gap remains between these developments and their practical implementation in real-world settings. Many OR and ML approaches struggle to scale to realistic problem sizes, lack robustness to uncertainty, or fail to address implementation constraints faced by practitioners in industry. Through four distinct works conducted in collaboration with industry partners, this research demonstrates how methodological advancements can bridge this theory-practice divide while maintaining rigorous theoretical foundations and guarantees. In the first part, we focus on optimization methodologies that scale traditional OR approaches to handle real-world problem sizes and uncertainty. In Chapter 2, we develop a stochastic Benders decomposition scheme for large-scale network design problems, a class of problems ubiquitous in the logistics, transportation, and energy sectors. By incorporating sampling techniques within the decomposition framework, we achieve deterministic optimality guarantees while reducing computational costs, enabling solutions for networks with 700 nodes—an order of magnitude larger than previously tractable instances—while achieving optimality gaps of 5-7% compared to 16-27% for traditional deterministic approaches. In Chapter 3, we present a holistic framework for industrial decarbonization, developed with a major phosphate producer planning to quadruple energy consumption while transitioning to renewable sources. Our robust optimization approach combines strategic capacity expansion planning over a 25-year horizon with adaptive operational models, providing 95% reliability guarantees while balancing solar and wind integration with battery storage to meet a projected 12 TWh annual demand.
In the second part, we shift our focus to developing AI systems that address the unique challenges of medical data abstraction and clinical decision support. In Chapter 4, we address the challenge of automating clinical data abstraction from electronic health records, collaborating with the Society of Thoracic Surgeons to populate their Adult Cardiac Surgery Database. Our AI pipeline combines 31 models per target variable with a two-tiered quality control system, achieving over 99% accuracy while automatically extracting 43-50% of registry variables, demonstrating how AI can dramatically reduce manual abstraction burden while maintaining clinical standards. In Chapter 5, we extend this healthcare AI focus by developing xHAIM (Explainable Holistic AI in Medicine), which addresses the limitations of current clinical AI systems in handling extensive patient records, providing interpretability, and incorporating medical knowledge. Through semantic similarity techniques and generative AI, xHAIM improves predictive performance while generating clinically grounded explanations that enhance trust and adoption by healthcare practitioners.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164493</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-species genome-wide CRISPR screens identify conserved suppressors of cold-induced cell death</title>
<link>https://hdl.handle.net/1721.1/164492</link>
<description>Multi-species genome-wide CRISPR screens identify conserved suppressors of cold-induced cell death
Lam, Breanna
During hibernation of Syrian hamsters, the core body temperature shows a remarkable decrease, going from 37°C to 4°C. Although this ability to survive at low temperatures could in principle be due to systemic factors that occur during hibernation, we and others have observed that cells from hibernating rodents cultured in vitro maintain this ability. While others have studied characteristics of cells from hibernating and non-hibernating organisms, the genes and pathways involved in cold-induced cell death have not been systematically explored.
In this thesis, we conduct two genome-wide CRISPR-Cas9 screens, in a cold-sensitive (K562) and a cold-resistant (BHK-21) cell line, and uncover GPX4 and related selenocysteine incorporation genes as important for protection against cold-induced cell death. Using genetic knockdowns, along with overexpression of GPX4, we confirm our findings and demonstrate that levels of GPX4 may be limiting in K562 cells, contributing to their cold sensitivity. Additionally, pharmacological validation using inhibitors of GPX4 reveals that the catalytic activity of GPX4 is dependent on the selenocysteine in the active site. Our findings extend across multiple cell lines and cell types from six species. Taken together, our results suggest that GPX4 may be a powerful and conserved suppressor of cold-induced cell death.
Building on our initial findings, we go on to show that cold exposure leads to increases in membrane permeability. This membrane permeability is transient, as rewarming the cells reduces permeability to baseline levels. We also test the role of lipid peroxidation in contributing to membrane permeability and find that although it contributes in some cell lines, it is not the sole contributor, as ferroptosis inhibitors do not fully mitigate membrane permeability. We go on to test different membrane channels and do not see decreases in membrane permeability, potentially indicating pathway-independent effects of temperature on membrane permeability. Altogether, this work provides a foundation for understanding how cold exposure influences mammalian cells.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164492</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Why Landfills Endure: Quantifying economic barriers to material and energy recovery from municipal solid waste in the United States</title>
<link>https://hdl.handle.net/1721.1/164491</link>
<description>Why Landfills Endure: Quantifying economic barriers to material and energy recovery from municipal solid waste in the United States
Baidoo, Jacqueline E.
Municipal solid waste (MSW) is a heterogeneous mixture of materials discarded by residential and nonresidential generators at end-of-life processing facilities for treatment and disposal. Conventional treatment methods reduce waste volumes through recycling via material recovery facilities, energy recovery via municipal solid waste incinerators, and biochemical conversion via composting. Even so, nearly 50% of total MSW generated in the United States was sent to landfills for final disposal in 2018, and almost half of all landfills currently in operation are expected to reach capacity by 2050. Waste planners seek to use developing resource recovery technologies like dry anaerobic digestion, gasification, and pyrolysis to narrow the gaps in end-of-life processing. Such technologies are posited to improve materials circularity and advance zero-waste landfill diversion goals by transforming residuals into electricity, fuels, and precursors to chemicals and fertilizers. However, despite demonstrated improvements to technical inefficiencies in waste valorization, numerous projects built on these technologies have failed to break through to commercial success. We investigate the contribution of regional and economic factors to the success of resource recovery projects through the lens of why landfills remain the predominant method of waste disposal. We build cost models of conventional and select developing treatment methods and use discounted cash flow analysis to estimate financial feasibility by local MSW compositions as reported in regional waste characterization studies.

Findings indicate that the most critical factor for sustainable operation is a consistent supply of waste materials at the quality and scale that maximize production efficiency, which is not achievable without rigorous data monitoring of MSW composition. Conversely, dependence on waste volume rather than composition makes land disposal a uniquely flexible pathway capable of subsidizing the costs of resource recovery. Progress towards landfill diversion is economically linked to the opportunity cost of avoiding landfill utilization. Unless municipalities are able to introduce subsidies elsewhere in the waste management ecosystem through gate fees and credits, projects will fail where marginal net costs of diversion exceed the revenues lost from avoided landfilling. Targeted processing of organic wastes can facilitate an average diversion of 24% for the compositions surveyed and was found to be viable for composting and dry anaerobic digestion projects at low to negligible financial losses compared to landfilling.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164491</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enlightening Artificial Intelligence with Science</title>
<link>https://hdl.handle.net/1721.1/164490</link>
<description>Enlightening Artificial Intelligence with Science
Liu, Ziming
Today’s artificial intelligence (AI) systems, while remarkably capable, are largely black boxes. This black-box nature raises concerns for those who build AI – “How can we construct and understand AI in scientifically grounded ways?” – and for those who use AI – “How can we trust systems we do not understand?”. This thesis takes a humble step towards addressing the black-box problem. Building white boxes with science (Science for AI): The prevailing paradigm in AI today – “scaling is all you need” – focuses on scaling up existing models. However, this approach often yields systems that are neither interpretable nor efficient. I argue that scientific principles offer fresh perspectives for designing more transparent and effective AI systems. This is demonstrated through Kolmogorov-Arnold Networks (KANs) inspired by mathematics, Poisson Flow Generative Models (PFGM) rooted in physical intuition, and brain-inspired modular training (BIMT) drawing insights from neuroscience, among others. Opening black boxes (Science of AI): Modern AI models exhibit a range of puzzling behaviors – such as grokking, neural scaling laws, and emergent representation learning – whose underlying mechanisms remain poorly understood. I employed simplified “spherical cow” models to investigate these phenomena from the perspective of phase transitions. I will show that grokking is a special phase in the hyperparameter space, which can be controlled and eliminated. The learned algorithms after grokking also display distinct phases, called the clock and pizza algorithms. AI for Science: With greater interpretability, AI systems can begin to function as “AI Scientists” capable of (re)discovering deep scientific structures from data. These include conservation laws, hidden symmetries, integrable systems, Lagrangian and Hamiltonian formulations, modular structures, and high-precision solutions. I believe my research contributes to the emerging interdisciplinary field that unites AI and Science.
Building upon the foundation laid in this thesis, I envision a future in which science guides AI out of its current era of alchemy and into a true era of scientific understanding.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164490</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative modeling of 5' splice site subclass regulation and evolution</title>
<link>https://hdl.handle.net/1721.1/164489</link>
<description>Quantitative modeling of 5' splice site subclass regulation and evolution
Kenny, Connor Jens
Pre-mRNA splicing is an essential molecular process required for eukaryotic gene expression. In this thesis, I present a previously unknown mechanism of splicing regulation in which a family of splicing factors, the LUC7 family, compete to differentially impact 5' splice site (5' SS) selection in a sequence-dependent manner. I quantitatively characterize two major subclasses of 5' SS in eukaryotes and outline distinctive features of 5' SS in exons affected by the three human LUC7 paralogs: LUC7L2 and LUC7L enhance splicing of “right-handed” 5' SS that exhibit stronger consensus matching on the intron side of the nearly-invariant /GU, while LUC7L3 boosts splicing of “left-handed” 5' SS with stronger consensus matching upstream of the /GU. Using a range of experimental systems, from human cells to mutant plants, I show that LUC7 paralogs have opposing effects on these two 5' SS subclasses and that this regulatory mechanism likely originated in the last common ancestor of animals and plants over 1.5 billion years ago. I further evaluate a competing model of 5' SS subclass regulation involving METTL16-mediated U6 snRNA modification and reconcile both models by devising computational tools that identify sequence features predictive of splicing dysregulation in transcriptome-wide datasets. Finally, I examine the evolutionary dynamics of left- and right-handed 5' SS and propose a model of intron evolution in which codon and intron phase constraints in protein-coding genes shape both minor-to-major intron conversion and transitions between left- and right-handed 5' SS subclasses.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164489</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compiler-Hardware Co-Design for Pervasive Parallelization</title>
<link>https://hdl.handle.net/1721.1/164488</link>
<description>Compiler-Hardware Co-Design for Pervasive Parallelization
Ying, Victor A.
Modern computer systems have hundreds of processor cores, so highly parallel programs are critical to achieve high performance. But parallel programming remains difficult on current systems, so many programs are still sequential. This dissertation presents new compilers and hardware architectures that can parallelize complex programs while retaining the simplicity of sequential code. Our new systems allow real-world programs to use hundreds of cores without burdening programmers with concurrency, deadlock, or data races. &#13;
 &#13;
This dissertation follows a novel approach that eliminates the burden of explicit parallel programming to make parallel execution pervasive. This approach relies on four guiding principles. First, exploiting implicit parallelism preserves the simplicity of sequential execution. Second, dividing computation into tiny tasks, as short as tens of instructions each, unlocks plentiful fine-grained parallelism in challenging programs. Hardware-compiler co-design techniques can create many tasks in parallel and reduce per-task overheads to make tiny tasks scale to many cores. Third, new hardware and software mechanisms can compose parallelism across entire programs, removing serializing barriers to overlap executions of nested parallel subroutines. Finally, exploiting static and dynamic information for data locality reduces data movement costs while maintaining load balance on large multicore systems. &#13;
 &#13;
This dissertation presents three systems that embody these four principles. First, T4 introduces automatic program transformations that exploit a novel hardware architecture to parallelize sequential programs. As a result, T4 scales hard-to-parallelize real-world programs to tens of cores, resulting in order-of-magnitude speedups. Second, S5 builds on T4 with novel transformations to remove needless serialization in a broad class of challenging data structures. Thus, S5 scales complex real-world programs to hundreds of cores, delivers additional order-of-magnitude speedups over T4, and outperforms manually parallelized code tuned by experts. Finally, ASH is an accelerator that demonstrates that the same approach can be applied with simpler mechanisms tailored for digital circuit simulation. A small ASH implementation is 32× faster than a large multicore CPU running a state-of-the-art parallel simulator.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164488</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Graph Neural Network Training on Large Graphs in A Distributed Setting</title>
<link>https://hdl.handle.net/1721.1/164487</link>
<description>Optimizing Graph Neural Network Training on Large Graphs in A Distributed Setting
Murzynowski, Philip
Graph neural networks (GNNs) are an important class of methods for leveraging the information present in graph structures to perform various learning tasks. Distributed GNNs can improve the performance of GNN execution by dividing computation among multiple machines, and scale to large graphs by partitioning graph features and the graph structure. Although distributed GNNs are able to achieve self-relative speedup, they are often slower than well-optimized code running on a single machine. For example, evaluation of the prevalent Distributed DGL system on graphs in the Open Graph Benchmark shows Distributed DGL can achieve a speedup of over 2× when moving from one to four nodes, but execution of Distributed DGL on four nodes is 2× slower than a well-optimized GNN system, such as the SALIENT system, on a single machine.&#13;
&#13;
In my thesis, I argue that it is possible for a distributed GNN system to be both fast and scalable. Specifically, I show that it is possible to match the performance of well-optimized, non-distributed codes for GNN training and also achieve good scalability when running in the distributed setting. I present a system called Distributed SALIENT and motivate its design through profiling and identifying bottlenecks that arise in the distributed setting. Key components of Distributed SALIENT include the use of well-optimized code for local computations, pipelining of inter-machine communication, and a careful trade-off between data partitioning and partial replication.&#13;
&#13;
I evaluate Distributed SALIENT on the Open Graph Benchmark (OGB) and show that Distributed SALIENT achieves good speedup compared to SALIENT’s well-optimized single-node code while only using replication factors of roughly 5%. In fact, in experiments with training a 3-layer GraphSAGE model on the large OGB papers100M data set, Distributed SALIENT on 8 nodes is 8.6× faster than SALIENT on 1 node.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164487</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomistic Insights into Alloy Solidification using Machine-Learning Potentials</title>
<link>https://hdl.handle.net/1721.1/164486</link>
<description>Atomistic Insights into Alloy Solidification using Machine-Learning Potentials
Cao, Yifan
Alloy solidification is a critical process in materials design and manufacturing, as it governs the formation of microstructures that determines the mechanical, thermal, and chemical properties of materials. However, direct in situ observation remains extremely challenging due the need for high spatial and temporal resolution under elevated temperatures. On the theory side, solidification is a complex phenomenon often studied using phase-field simulations, which rely on empirically fitted parameters and simplified assumptions about interfacial kinetics, limiting their predictive capability. Capturing this process at the atomistic level can yield more fundamental insights, but is hindered by the need for interatomic models that are both accurate and computationally efficient across relevant timescales and length scales. To overcome these challenges, this thesis develops and applies machine-learning interatomic potentials (MLPs) that capture the chemical complexity of metallic alloys, providing a physically accurate and computationally efficient backbone for large-scale atomistic simulations of complex alloy solidification. We first address a foundational challenge in deploying MLPs: the systematic construction of robust and transferable training datasets. Using CrCoNi as a model system, we evaluate various strategies for training MLPs to capture chemical short-range order (SRO), a critical feature in high-entropy alloys, and its effects on materials quantities of relevance for mechanical properties, such as stacking-fault energy and phase stability. It is demonstrated that energy accuracy on test sets often does not correlate with accuracy in capturing material properties, which is fundamental in enabling large-scale atomistic simulations of metallic alloys with high physical fidelity. Based on this analysis we systematically derive design principles for the rational construction of MLPs that capture SRO in the crystal and liquid phases of alloys. 
The resulting MLPs are validated against experimental measurements on key thermophysical properties, including melting points, heat capacities, thermal expansion coefficients, and enthalpy of SRO formation, confirming their suitability for predictive simulations. With these validated potentials, we then investigate the evolution of SRO during rapid solidification processes. Our simulations reveal that alloy processing can lead to nonequilibrium steady states of SRO that differ qualitatively from any equilibrium configuration. We attribute this behavior to an inherent ordering bias introduced by nonequilibrium dynamics during solidification. These findings suggest that conventional manufacturing processes offer new opportunities to tailor alloy properties by accessing a broader spectrum of nonequilibrium SRO states, expanding the alloy design space beyond the equilibrium spectrum. Finally, we conduct predictive solidification simulations of chemically complex alloys across experimentally relevant growth rates (0.15–2 m/s) , alloy compositions, interface orientations, and undercooling levels. These simulations capture the dynamic build up of solute partitioning at the solid-liquid interface and reveal kinetics-dependent segregation patterns that deviate markedly from equilibrium predictions. The developed framework enables direct evaluation of key kinetic properties under realistic growth conditions, including interface mobility, liquid diffusivity, and solute trapping. Altogether, this thesis develops machine-learning potentials capable of capturing the chemical complexity of metallic alloys with near DFT-level accuracy, and establishes a framework for extracting key kinetic properties through predictive simulations of alloy solidification. 
When combined with emerging advances in continuum-scale modeling, these results lay the groundwork for truly multiscale investigations of alloy solidification, enabling DFT-level predictive capabilities at scales directly comparable to experimental alloy design and additive manufacturing processes.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164486</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sequential Resource Allocation and Applications in Revenue Management</title>
<link>https://hdl.handle.net/1721.1/164485</link>
<description>Sequential Resource Allocation and Applications in Revenue Management
Zhou, Zijie
Sequential resource allocation is a fundamental problem in operations research, encompassing a wide range of applications where decisions must be made dynamically under uncertainty. This thesis develops new theoretical foundations, explores practical applications, and establishes evaluation methodologies for sequential resource allocation, with a focus on revenue management, robustness and fairness, and experiment design. On the theoretical side, this thesis advances the study of classical network revenue management, a long-standing challenge in dynamic resource allocation. We introduce the first LP-free algorithm, improving the regret bound from O(T ^1/2) to O(T ^3/8)—a significant step toward closing the gap between existing algorithms and the theoretical lower bound of O(1). Additionally, we enhance robustness in sequential resource allocation by developing algorithms that incorporate machine-learned advice, striking a balance between overly conservative worst-case models and overly optimistic stochastic assumptions. Furthermore, we integrate individual fairness into sequential decision-making, ensuring equitable resource allocation without compromising competitive performance. On the application side, we demonstrate the impact of sequential resource allocation in the hospitality management domain. Collaborated with Oracle Lab, we design an online upgrading mechanism that enables hotels to dynamically determine when and at what price to offer room upgrades. Additionally, we propose near-optimal, fast approximation algorithms for this mechanism, achieving a regret bound of O(logT), which is close to the natural lower bound of O(1). We also incorporate our upgrading algorithm to a hotel dataset, and improves more than 20% revenue in 2022. Finally, we introduce new methodologies for evaluating sequential decision-making policies, with a focus on online experiment design. 
Traditional A/B testing methods struggle with dynamically arriving data, leading to biased or inefficient experimental results. Our pigeonhole experimental design effectively reduces bias and outperforms several well-known experimental design policies, including matched pair design and completely randomized design, making it a more reliable approach for evaluating sequential decision-making strategies. By unifying theoretical insights, real-world applications, and online evaluation frameworks, this thesis contributes to the broader field of sequential resource allocation, providing fundamental advancements with practical implications across revenue management and experimental design.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164485</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aspects of Moiré Quantum Matter</title>
<link>https://hdl.handle.net/1721.1/164484</link>
<description>Aspects of Moiré Quantum Matter
Paul, Nisarga
The advent of moiré quantum matter has newly unified disparate themes in modern condensed matter physics, chief among them band theory, correlations, and topology. This thesis investigates how the interplay between these foundational elements leads to novel electronic phenomena uniquely enabled by moiré superlattices. We focus on modulated Landau levels, one of the simplest settings featuring all three of band dispersion, correlations, and topology, yet rich enough to capture much of the interesting phenomenology of moiré quantum matter. We characterize emergent quantum phases that are newly unlocked by the moiré regime. Specifically, we discuss directional localization, formation of Hall crystals with tunable Chern numbers, and novel fractional Chern insulator collective mode physics in the context of modulated Landau levels. We also show that a class of models comprising itinerant electrons strongly coupled to skyrmion-like magnetic textures, closely connected with moiré transition metal dichalcogenides in which the fractional quantum anomalous Hall effect was observed, can host flat Chern bands, emergent Landau levels, and zero-field non-Abelian topological order. This thesis provides a framework for the study of the essential features of moiré quantum matter and demonstrates how moiré systems provide unprecedented opportunities to explore, design, and manipulate strongly correlated topological quantum matter.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164484</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Optimized Bayesian Analysis Framework for the KATRIN Experiment</title>
<link>https://hdl.handle.net/1721.1/164483</link>
<description>An Optimized Bayesian Analysis Framework for the KATRIN Experiment
Xu, Weiran
Neutrinos, which were originally predicted to be massless within the Standard Model of particle physics, have been confirmed to possess non-zero masses through the discovery of neutrino flavor oscillations. Oscillation measurements precisely determine the mass-squared splittings between neutrino mass eigenstates, establishing lower limits for the effective electron-neutrino mass of 0.009 eV for the normal mass ordering and 0.050 eV for the inverted mass ordering. However, the absolute neutrino mass scale remains a fundamental open question in both particle physics and cosmology.&#13;
&#13;
Precise spectroscopy of the beta-decay spectrum provides a model-independent probe of the absolute neutrino mass via decay kinematics. The KArlsruhe TRItium Neutrino (KATRIN) experiment, utilizing a Magnetic Adiabatic Collimation and Electrostatic (MAC-E) filter spectrometer, sets the world's tightest upper limit of m_v &lt; 0.45 eV (90% C.L.) based on its first five measurement campaigns. KATRIN is scheduled to complete its 1,000-day data-taking period by the end of 2025, targeting a final sensitivity of m_v &lt; 0.3 eV. Future improvements on neutrino mass measurements will depend on advances in differential detection techniques and the development of atomic tritium sources.&#13;
&#13;
This thesis presents an optimized modeling of the KATRIN beta spectrum and a comprehensive analysis of the first five measurement campaigns. An improved framework for computing the theoretical beta spectrum and the KATRIN response function is developed to address the complexities arising from the asymmetric field configurations in the main spectrometer. Benefiting from a computational speedup of four orders of magnitude and improved numerical stability, frequentist best-fit values for individual campaigns are reported, together with an upper limit on neutrino mass using the Lokhov-Tkachov confidence belt construction method.&#13;
&#13;
Parallel Bayesian analyses are conducted on the same dataset, yielding an independent and complementary statistical interpretation of the experimental results. Posterior distributions for the squared neutrino mass are sampled for each campaign under a flat prior on m_v² using the parallel Stretch-Move algorithm, and are subsequently combined with a novel approach developed in this work to enhance computational efficiency. Convergence of each Markov chain is assessed through autocorrelation time analysis, and the robustness of the results is validated through cross-team comparison and consistency checks with profile likelihood. The Bayesian results reported here enable straightforward integration with constraints from oscillation measurements and cosmological observations, and the methodologies developed in this work are directly applicable to the final KATRIN dataset, providing a foundation for future neutrino mass analyses and searches for physics beyond the Standard Model.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164483</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Redox-Mediated Processes Toward Modular Electrochemical Systems</title>
<link>https://hdl.handle.net/1721.1/164482</link>
<description>Redox-Mediated Processes Toward Modular Electrochemical Systems
Mallia, Christopher T.
Electrochemical technologies offer an attractive path toward a sustainable future where conventional methods of storing energy or producing critical materials are increasingly coupled to renewable electricity generation. To enable such a future, it is imperative that we have strong foundational understanding of electrochemical reactions that are useful to our needs. Redox flow batteries (RFBs) have emerged as a promising architecture for large scale storage of electricity to bridge the gap when renewable generation is unavailable. These devices operate by storing charge in the form of redox-active species that are dissolved into an electrolyte, and subsequently passed through an electrochemical cell to either store or release electrical energy. An extension of the concept of RFBs toward more general applications is to use the dissolved redox-active species to drive a reaction with another material, either to increase the energy storage density through an electrochemically active charge-dense material, or to drive a useful chemical reaction. This extension is termed a redox-mediated (RM) process, and inherits many of the complexities and intricacies of conventional electrochemical technologies, specifically that of RFB-type devices. The subject of this thesis is the development of knowledge and techniques for studying RM processes toward practical embodiments. While technical implementations of this concept are still nascent, many promising early results have been found in devices that use redox-mediated reactions to store electricity. Despite this, progress is frequently hindered by a lack of foundational knowledge from which to ideate better systems, and techniques to experimentally determine underlying physics. First, I establish the development of the RM concept over the past years as primarily through proof-of-concept electrochemical reactors which mimic RFBs. 
Second, we establish that the underlying nature of some RM reactions can be quantified and understood through corrosion principles, which guide our intuition for selecting chemistries and operating conditions. Third, I demonstrate that the behavior of many desirable RM chemistries is intrinsically coupled to passivation phenomena, and that this must be accounted for in reaction design. Fourth and finally, I provide experimental and practical guidance for researchers in this field, coupled with the design of some apparatus and techniques useful for characterizing RM reactions in specific and electrochemical processes in general. This body of work is broadly intended to advance understanding of electrochemically active interfaces and enable technology concepts which promote a sustainable future.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164482</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive Modeling of Chemical Reactivity for Sustainability</title>
<link>https://hdl.handle.net/1721.1/164481</link>
<description>Predictive Modeling of Chemical Reactivity for Sustainability
Singhal, Avni Priya
Predicting and controlling chemical reactivity is key to sustainable material and process design. However, modeling reactivity at scale remains challenging due to the computational demands of quantum chemical methods and the complexity of reaction mechanisms. This thesis explores how high-throughput computational approaches, rooted in quantum chemistry and enabled by automation, can be used to interrogate reactivity across large chemical spaces. We focus on two domains where reactivity governs process efficiency and sustainability: solvent-based carbon capture and polymer (specifically thermoset) manufacturing.&#13;
&#13;
We first investigate pi-conjugated heterocyclic nucleophiles as alternative carbon capture solvents to address the high regeneration energy and degradation rates of conventional amine-based systems. We combine synthetic template-based library enumeration, density functional theory (DFT), and machine learning models to evaluate binding energies, capture capacity, regeneration thermodynamics, and oxidative stability. Structure–property analysis reveals design strategies to enhance capture strength while balancing tradeoffs with desorption temperature and degradation resistance.&#13;
&#13;
We next focus on designing monomers for frontal ring-opening metathesis polymerization (FROMP), a polymerization mode that enables rapid, energy-efficient manufacturing of polymers. This self-propagating process harnesses exothermic reactions to sustain a polymerization front without continuous external heating, but it requires monomers with a finely tuned balance of thermodynamic and kinetic parameters. We develop a multi-level screening pipeline that integrates DFT-calculated properties with a reaction-diffusion model to predict front behavior directly from the atomistic structure of the monomer. We experimentally validate a preliminary pipeline, identifying a new class of FROMP-capable furan-benzyne monomers, and uncover additional candidates from unexplored chemical spaces that overcome limitations of known systems. &#13;
&#13;
Together, these studies demonstrate how high-throughput, mechanism-informed modeling can guide the discovery of molecules and materials that meet complex reactivity and performance criteria.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164481</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>When Your Home Becomes a Panda Park: The Opportunity and Upheaval of China's Giant Panda National Park</title>
<link>https://hdl.handle.net/1721.1/164480</link>
<description>When Your Home Becomes a Panda Park: The Opportunity and Upheaval of China's Giant Panda National Park
Zhao, Celina
In December 2016, China launched the Giant Panda National Park (GPNP). A massive ecological initiative aimed at safeguarding its beloved national symbol and international icon of conservation, the park marked an unequivocal win for giant pandas. But for the 100,000 people already living in and around the borders, the outcome was not as clear. &#13;
The GPNP seeks to establish a harmonious balance between biodiversity protection and human development. But the vast amount of land covered by the park means not all places are equally primed to achieve that goal. A handful of communities have been designated as exclusive entrance communities, with lavish funding to become the face of the national park. In others, a persistent question simmers: Are pandas more important than people? &#13;
Central to this story is how individuals are adapting to and reimagining their futures. Rather than a binary of winners and losers, the GPNP has sparked a wide range of human responses, showing that the path to a sustainable future between people and pandas is far from black and white.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164480</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Sequence Landscape of Bacterial Genes is Shaped by Long-Range mRNA Folding</title>
<link>https://hdl.handle.net/1721.1/164479</link>
<description>The Sequence Landscape of Bacterial Genes is Shaped by Long-Range mRNA Folding
Gill, Manraj Singh
An evolutionary selection for optimal expression of genes in regulatory networks has led to discernible sequence patterns in bacterial genomes observed in nature. Such patterns result from gene regulatory strategies that leverage sequence-dependent interactions with key cellular machineries and regulatory molecules. While numerous regulatory strategies that shape bacterial gene sequence have been characterized, predicting functional consequences from sequence alone remains challenging due to the sheer vastness of the possible sequence space. Moreover, the primary gene sequence encodes information on secondary and tertiary topologies that the molecules of the central dogma can fold into. Specifically, though local messenger RNA (mRNA) structures are known to regulate bacterial gene expression, the role of long-range mRNA folding remains unclear despite the predicted prevalence of such interactions across mRNAs. In bacteria, a major regulator of mRNA decay and translation rates is accessibility of the ribosome binding site (RBS) to the ribosome. Sequences in the mRNA’s 5' untranslated region (UTR) complementary to the RBS can decrease gene expression by base pairing and occluding ribosomes from binding. To determine whether such antagonistic sequences are also the primary determinants of sequence choice along the rest of the mRNA transcript, we measured the effect of all possible 8-nucleotide substitutions (65,536 variants) on mRNA levels when placed in multiple positions along a bacterial transcript. We find that, while the vast majority of substitutions in the middle of genes negligibly affect RNA level, 8mers with complementarity to parts of the RBS exhibit the strongest effects by increasing RNA degradation rates up to 4-fold. RBS-complementary sequences also decrease translation initiation rates when placed in a coding sequence, and are able to occlude ribosome binding even when they are located hundreds of nucleotides away from the start codon. 
The inhibitory effect of such secondary structures on gene expression likely explains a strong selection against sequences complementary to conserved parts of RBSs throughout coding sequences of genes from diverse bacterial genomes, which we uncover through computational analysis. Together, this thesis reveals the widespread impact of RNA intramolecular interactions in vivo on both mRNA stability and translation and uncovers a key constraint on gene sequences.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164479</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Materials for Non-Compressible Torso Hemorrhage and Internal Bleeding</title>
<link>https://hdl.handle.net/1721.1/164478</link>
<description>Engineering Materials for Non-Compressible Torso Hemorrhage and Internal Bleeding
Hong, Celestine Jia Huey
Non-compressible torso hemorrhage (NCTH) and internal bleeding result in a significant number of preventable casualties worldwide among civilians and in the field. In particular, internal bleeding can only be diagnosed through changes in vital signs and then through imaging modalities that may only be available in a hospital setting. Over the past few decades, researchers in the field have sought to address these needs by developing hemostats that can rapidly expand, bind, or seal an exposed wound, or interact with wound-specific components when delivered intravenously to enhance preexisting hemostatic processes.&#13;
&#13;
The first part of this thesis investigates the effect of hemostatic nanoparticle size on their interactions with platelets. Small nanoparticles were observed to result in an increased percentage of specifically bound single platelets under flow, and intermediate nanoparticles were observed to result in the greatest degree of platelet recruitment to a platelet-collagen surface. Large nanoparticles were observed to result in the most nanoparticle mass bound to a surface, the shortest circulation time and retention, and the highest pulmonary accumulation. Ultimately, intermediate nanoparticles were shown to result in the most significant increase in survival relative to the saline control in a lethal inferior vena cava (IVC) injury model (84.6% vs 26.7%), as well as the greatest accumulation at the injured IVC relative to uninjured vessel controls.&#13;
&#13;
Subsequently, the intermediate nanoparticles from the prior study were functionalized with bio-orthogonal click-crosslinkable azide groups to achieve targeted crosslinking behavior. Commercial multiarm PEG functionalized with the corresponding clickable moiety, dibenzocyclooctyne (DBCO), and DBCO-PEG-b-PLGA nanoparticles were delivered as the second part of this two-component system. This system was demonstrated to increase platelet recruitment and decrease fibrin loss during plasminolysis in vitro. When challenged in a mouse liver resection model, the two-component system resulted in significantly increased survival relative to the nanoparticle-only system and higher accumulation in the remnant liver.&#13;
&#13;
Finally, a charge-inverting polymer was synthesized through controlled radical polymerization. The material was demonstrated to undergo rapid charge inversion when exposed to physiological pH, resulting in the near-complete lift-off within a minute of a layer-by-layer drug film into the dermis when coated on microneedles. This versatile release platform could be coated on wound dressings to facilitate the release of therapeutics to aid in healing, or other applications involving charged films. &#13;
&#13;
In sum, this thesis has investigated several new materials and assays for the treatment of traumatic hemorrhage, opening potential avenues for the development of more effective hemostats.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164478</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Nonconvex and Robust Optimization</title>
<link>https://hdl.handle.net/1721.1/164477</link>
<description>Advances in Nonconvex and Robust Optimization
Koukouvinos, Theodoros
Nonconvex optimization presents significant challenges, as identifying the global optimum is often difficult. This thesis introduces novel algorithms to find the exact solution of a broad class of nonconvex optimization problems. The thesis is structured into four parts. In Chapter 2, we propose a novel method for solving nonconvex optimization problems in which the nonconvex components are sums of linear times convex (SLC) functions. We introduce a new technique, called the Reformulation-Perspectification Technique (RPT), to obtain a convex approximation of the original nonconvex optimization problem. We then incorporate RPT within branch and bound to obtain the globally optimal solution of the nonconvex optimization problem. By using the RPT, we obtain a convex relaxation by forming the perspective of each convex function and linearizing all product terms with newly introduced variables. To further tighten the approximation, we pairwise multiply constraints. Accordingly, in Chapter 3, we analyze all possibilities of multiplying conic constraints, a very wide class of constraints. Further, we delineate methods for deriving new, valid linear and second-order cone inequalities for pairwise constraint multiplications involving the power cone and exponential cone, thereby enhancing the strength of the approximation. In Chapter 4, we address nonconvex optimization problems that involve polynomials. We derive valid SLC decompositions for polynomials, in which the linear functions are inequalities of the feasible region and the convex functions are quadratics. We prove the existence of such SLC decompositions for arbitrary-degree polynomials. Further, out of the many possible SLC decompositions, we obtain the one that results in the tightest lower bound. Finally, in numerical experiments we show that our method often outperforms state-of-the-art approaches for polynomial optimization.
In Chapter 5, we propose a robust optimization framework that immunizes some of the central linear algebra problems against data uncertainty. Namely, we formulate linear systems, matrix inversion, eigenvalue-eigenvector computation and matrix factorization under uncertainty as robust optimization problems using appropriate descriptions of uncertainty. We show that for both linear systems and matrix inversion, the robust approach leads to more accurate solutions than the nominal approach in the case of nearly singular matrices.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164477</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fabricating and Tailoring Halide Perovskites for Photovoltaic Applications</title>
<link>https://hdl.handle.net/1721.1/164476</link>
<description>Fabricating and Tailoring Halide Perovskites for Photovoltaic Applications
Kadosh Zhitomirsky, Tamar
Green energy is a contemporary global concern, and research into materials for solar energy harvesting is at the heart of potential solutions to the energy crisis. Halide perovskites are leading candidates to replace silicon in next-generation solar cells. This thesis focuses on halide perovskite materials, aiming to understand their structure, electronic and ionic properties and photo-activity, and to redirect their fabrication techniques to address global market needs and requirements. In this work we developed alternative, vapor-based fabrication techniques, based on manufacturing-compatible, safe, rapid and scalable processes, that have the potential to improve material stability and efficiency.&#13;
Vapor Transport Deposition (VTD) is investigated as a promising fabrication method for thin film halide perovskites and beyond. We explored the deposition parameter space and elucidated relationships and trends regarding composition, structure and deposition rate. We examined the morphology, crystal phase formation, optical and electrical properties, and finally the performance of the deposited films when incorporated into solar cells.&#13;
We begin by exemplifying the viability of vapor transport co-deposition in fabricating active perovskite films, utilizing methylammonium lead iodide (MAPbI3) as a simplified model system. We then design an improved version of the vapor transport deposition system and transition to the more technologically attractive perovskite composition formamidinium lead iodide (FAPbI3). Learning from previous attempts to fabricate this material, we developed a novel technique that we call hybrid two-step vapor-solution deposition, in which we use VTD to deposit the inorganic precursor, which is not readily dissolved in industry-acceptable solvents, and then react it with a solution of the organic precursors dissolved in a benign solvent. This technique allowed us to fabricate functioning FAPbI3-based solar cell devices in a safe, fast-paced, scalable and manufacturing-compatible fashion. The deposition rate is significantly influenced by chamber pressure and source temperature, and by controlling all deposition parameters, we systematically reached rates of up to 1200 nm/min, which is orders of magnitude faster than current comparable techniques. We found the technique to be reproducible, yielding 13% efficient devices, with champion efficiencies of up to 15.3%. We believe the proposed novel fabrication process offers an avenue for further improvement in solar cell stability and efficiency.&#13;
CsPbBr3, a fully inorganic halide perovskite, also shows great promise as a photodetector and gamma-ray detector and, like the other halide perovskites, is known to support halide-ion conductivity that contributes to device instability and reduced sensitivity to irradiation. We choose this as a model system to apply concepts from defect chemistry and demonstrate the ability to measure and manipulate the ionic conductivity in the material by stoichiometry control and doping.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164476</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Membrane protein conformational dynamics and ligand-binding interactions in bacterial glycoconjugate biosynthesis</title>
<link>https://hdl.handle.net/1721.1/164475</link>
<description>Membrane protein conformational dynamics and ligand-binding interactions in bacterial glycoconjugate biosynthesis
Higinbotham, Hugh
Membrane-associated proteins are an essential component of the complex biochemistry that is carried out at the membrane interface and perform essential functions for cellular life. Biophysical characterization of protein structure-function relationships faces a unique set of challenges due to the constraints of phospholipid bilayer chemistry and geometry. Advances in X-ray crystallography and cryo-electron microscopy have made progress in this regard, but dynamic structural features remain difficult to study. Small membrane proteins, such as those responsible for bacterial glycosylation, remain challenging to structurally characterize at all. Bacterial glycan synthesis pathways are essential for cell function yet highly variable between strains, making them promising systems for targeted antibiotic development. Many pathways have initiating small monotopic phosphoglycosyl transferases (SmPGTs) that show incredible specificity for minute changes in glycan chemistry despite being small enough to be tractable for many computational methods, which makes them ideal model systems for developing multidisciplinary strategies to study membrane protein dynamics. This thesis presents a strategy that employs structural bioinformatics in Chapter 2, molecular dynamics simulation (MD) in Chapter 3, and single-molecule FRET microscopy (smFRET) in Chapter 4 to observe the ligand-dependent conformational dynamics of integral membrane proteins in situ. It focuses on representative members of the SmPGT superfamily, which catalyze transfer of a phosphosugar from a soluble nucleotide-sugar donor to a membrane-embedded polyprenol phosphate acceptor in the initiating step of glycoconjugate biosynthesis in prokaryotes. The pipeline is employed to confirm the role of SmPGT conformational dynamics in substrate binding and informs the design of non-hydrolyzable substrate-mimetic inhibitors.
Chapter 5 further sets the stage for the use of structural bioinformatics and molecular simulation to characterize subsequent glycosyl transferase (GT) enzymes downstream in the pathway and presents initial results characterizing inter-protein cooperative interactions. The integrated approach of combining computational and experimental characterization methods has significantly contributed to the understanding of SmPGT structure-function relationships and opened up new directions of inquiry into specific PGT-ligand interactions, the development of new inhibitory compounds, and the role of inter-protein interactions in bacterial glycan synthesis.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164475</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional Genomic and Image-Based Screening Approaches for Probing Host-Pathogen Interactions</title>
<link>https://hdl.handle.net/1721.1/164474</link>
<description>Functional Genomic and Image-Based Screening Approaches for Probing Host-Pathogen Interactions
Carlson, Rebecca J.
Host-pathogen interactions represent a complex interplay that can evolve over millions of years. Interactions between bacteria or viruses and human cells, and the resulting evolved antipathogenic signaling pathways, are processes responsible for pathologies ranging from infectious diseases to autoimmune conditions and cancer. In addition, engineered designs inspired by pathogen interactions with hosts are increasingly being used to both treat and diagnose many pathologies that need not originate from infection with a pathogen. Therefore, it is critical to build and deploy scalable tools to better understand host-pathogen dynamics, both to better treat conditions where pathogens or antipathogenic signaling contribute directly to disease pathology and to engineer new treatments that address a broader range of disease states.&#13;
&#13;
In this thesis, I describe approaches to leverage functional genomics and image-based screening to perturb and profile host-pathogen interactions, including responses to two RNA viruses, Sendai virus and Ebola virus. These provide case studies highlighting the utility of high-content image-based screening for revealing new genes regulating predefined phenotypes of interest, as well as for generating single-cell imaging profiles that can be used to infer new genetic functions and phenotypic states directly from screening data without a priori specification. I also highlight an example of a genetic screen that revealed a robust negative result, leading to the hypothesis and validation of a novel function of the STING protein as a proton channel.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164474</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the field control operation of railway motors : a thesis</title>
<link>https://hdl.handle.net/1721.1/164458</link>
<description>A study of the field control operation of railway motors : a thesis
Davis, Stanley W.
            (Stanley Whitcomb)
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1925; Includes bibliographical references (leaf 91).
</description>
<pubDate>Thu, 01 Jan 1925 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164458</guid>
<dc:date>1925-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reviewing I.S. : how to handle legacy systems?</title>
<link>https://hdl.handle.net/1721.1/164457</link>
<description>Reviewing I.S. : how to handle legacy systems?
Orlando, Ricardo,
            1966-
Thesis: S.M.M.O.T., Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 1999; Includes bibliographical references (leaves 100-106).
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164457</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effects of changing economic conditions on energy costs in stainless-steel clad pressurized water reactors</title>
<link>https://hdl.handle.net/1721.1/164456</link>
<description>The effects of changing economic conditions on energy costs in stainless-steel clad pressurized water reactors
Trapp, Donald L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1962; Appendix contains numerous pamphlets.; Includes bibliographical references (leaves 135-136).
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164456</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The politics of metropolitan transportation.</title>
<link>https://hdl.handle.net/1721.1/164455</link>
<description>The politics of metropolitan transportation.
Colcord, Frank Carlton.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1964
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164455</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The design of a control system for the terminal phase of a satellite rendezvous</title>
<link>https://hdl.handle.net/1721.1/164454</link>
<description>The design of a control system for the terminal phase of a satellite rendezvous
Hollister, Walter M.,
            1930-
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1959; Includes bibliographical references (leaf 47).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164454</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compliance in a gyroscope gimbal</title>
<link>https://hdl.handle.net/1721.1/164453</link>
<description>Compliance in a gyroscope gimbal
Graham, James William.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1958
</description>
<pubDate>Wed, 01 Jan 1958 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164453</guid>
<dc:date>1958-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On hypergraphs and hypergeometries.</title>
<link>https://hdl.handle.net/1721.1/164452</link>
<description>On hypergraphs and hypergeometries.
Helgason, Thorkell.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1971; Vita.; Bibliography: leaves 158-159.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164452</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Noise analysis of circuit models representing maser operation.</title>
<link>https://hdl.handle.net/1721.1/164451</link>
<description>Noise analysis of circuit models representing maser operation.
Hempstead, Robert Douglas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1965; Bibliography: leaves 106-108.
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164451</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cocoa in the Ghanaian economy.</title>
<link>https://hdl.handle.net/1721.1/164450</link>
<description>Cocoa in the Ghanaian economy.
Bateman, Merril Joseph.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164450</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Even denominator quantum numbers and termination of the fractional series in the fractional quantum hall effect</title>
<link>https://hdl.handle.net/1721.1/164449</link>
<description>Even denominator quantum numbers and termination of the fractional series in the fractional quantum hall effect
Willett, Robert L.
            (Robert Lee)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1989; Includes bibliographical references (leaves 6-7).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164449</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving railroad terminal control systems : a case study of Southern Railway's Brosnan yard</title>
<link>https://hdl.handle.net/1721.1/164448</link>
<description>Improving railroad terminal control systems : a case study of Southern Railway's Brosnan yard
Ferguson, William Lloyd.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1979; Bibliography: leaves 194-195.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164448</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigations in the theory of quantum corrections to classical solutions of the Yang-Mills equations</title>
<link>https://hdl.handle.net/1721.1/164447</link>
<description>Investigations in the theory of quantum corrections to classical solutions of the Yang-Mills equations
Callias, Constantine John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1979; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164447</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wave equations, particles and chronometric geometry.</title>
<link>https://hdl.handle.net/1721.1/164446</link>
<description>Wave equations, particles and chronometric geometry.
Orsted, Bent.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1976; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164446</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planteez(tm) : business plan and preliminary research</title>
<link>https://hdl.handle.net/1721.1/164445</link>
<description>Planteez(tm) : business plan and preliminary research
Sanchez, Manuel A.
            (Manuel Andres),
            1979-
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2001; Includes bibliographical references (p. 15).
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164445</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic engineering of controlled, localized oligonucleotide delivery systems for wound angiogenesis</title>
<link>https://hdl.handle.net/1721.1/164410</link>
<description>Systematic engineering of controlled, localized oligonucleotide delivery systems for wound angiogenesis
Berger, Adam G.
The standard of care for diabetic wounds has remained relatively unchanged for decades, resulting in patients with wounds that do not heal on meaningful time scales, referred to as ulcers, and high rates of recurrence for patients whose wounds do heal. This common complication of diabetes decreases quality of life, increases mortality, and raises health care costs. Developing new paradigms to treat these wounds remains a formidable but critical challenge.&#13;
&#13;
Addressing diabetic ulcers at the molecular level may decrease healing time and prevent recurrence. Impaired blood vessel formation, or angiogenesis, in diabetic ulcers is an important target pathway. Angiogenesis is needed to bring oxygen, nutrients, signaling cues, and cells to newly formed tissue while removing waste. Nucleic acid oligonucleotide therapies, such as small interfering RNAs (siRNAs) or microRNA inhibitors (anti-miRs), that regulate gene expression at the post-transcriptional level, hold particular promise for promoting angiogenesis and wound healing; however, the large size and negative charge of these therapies require drug carriers to mediate their biological effect.&#13;
&#13;
In this thesis, we leverage sequential electrostatic adsorption of oligonucleotide therapy and polyelectrolytes into thin film coatings on commercial wound dressings through the layer-by-layer (LbL) process. These dressings package oligonucleotide, enhance its transfection efficacy, and control its temporal release locally to the wound bed. After initial validation experiments, we sought to systematically understand our drug carrier system and use this insight to engineer better wound dressings. First, we developed a proof-of-concept anti-miR-coated dressing and showed its efficacy in promoting both wound closure and sex-dependent angiogenesis. We found that therapy released from coated dressings had a preferential association with different wound cell types, particularly endothelial cells. We then sought to uncover how changes in the oligonucleotide structure itself may alter interactions with transfection polymers in thin film coatings. We found that binding with certain polyelectrolytes differed based on whether the therapy was a flexible single stranded anti-miR or a more rigid double stranded helix siRNA. We also showed how chemically modified nucleotides, such as locked nucleic acid and 2’-O-methyl RNA, can modulate affinity to polyelectrolytes and ultimately impact transfection efficacy. We also elucidated how physicochemical properties of the hydrolysable transfection-enhancing poly(β-aminoester) polymer mediate its efficiency in transfecting oligonucleotide therapy. We demonstrated that a more hydrophobic polymer enhanced transfection efficacy through its ability to facilitate permeation of biological barriers. Finally, we identified how modulation of the anionic excipients contained in these thin film coatings can be leveraged to vary the release kinetics from coated wound dressings. We engineered formulations that released on a fast or slow time scale. 
We observed that while both release time scales promoted efficacy in wound closure, they did so through potentially different mechanisms despite the same putative pro-angiogenic anti-miR therapy.&#13;
&#13;
In sum, this thesis elucidates how physicochemical properties and formulation of coated wound dressings alter their interfacial effects with biological systems. We use this knowledge to rationally design better drug carriers that can deliver pro-angiogenic oligonucleotide therapeutics to the wound bed. The findings have broad applications in the delivery of nucleic acid therapies for a wide host of diseases where local delivery to the injured tissue could prove beneficial. Ultimately, we also advance our pro-angiogenic coated wound dressing strategy towards clinical translation. Our strategy has the potential to provide a new, targeted therapeutic paradigm to help those suffering from diabetic ulcers.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164410</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atmospheric Impacts of Hydrogen as an Aviation Fuel</title>
<link>https://hdl.handle.net/1721.1/164348</link>
<description>Atmospheric Impacts of Hydrogen as an Aviation Fuel
Gibney, Evan M.
Hydrogen is being investigated as a promising zero-carbon aviation fuel, offering the potential to eliminate direct CO₂ emissions while being produced with low lifecycle greenhouse gas emissions. Despite these benefits, there are additional indirect climate and air quality costs associated with direct hydrogen emissions which are often overlooked. We quantify the perturbation in atmospheric composition associated with the introduction of hydrogen-fueled aircraft, broadening the current understanding of the non-CO₂ effects of these fleets. We use the GEOS-Chem High Performance (GCHP) global chemistry-transport model to conduct a spatially discretized, multi-year assessment of the atmospheric impacts of hydrogen-fueled aviation. We implement a flux surface boundary condition for hydrogen to provide an improved representation of the soil sink, relative to the default fixed boundary condition. This results in a net surface exchange of -16.7 Tg H₂ per year. Two hydrogen scenarios, representative of high and low mitigation scenarios for direct hydrogen emission rates, are evaluated using the updated GCHP implementation. For the two scenarios, we observe increases in the mean atmospheric methane mixing ratio of 3.34 ppbv and 10.7 ppbv, corresponding to increases in methane lifetime of 0.24% and 0.77%, respectively. The increased methane lifetime as well as in-situ oxidation of stratospheric hydrogen results in an increased stratospheric water vapor burden of 0.42 Tg and 2.3 Tg (or 0.052% and 0.28%) for the high and low mitigation scenarios, respectively. Additionally, we show the perturbation to tropospheric ozone levels to be between -0.047% and +0.30%, where the decreased ozone results from the removal of NOₓ emissions associated with fuel cells and low hydrogen emission rates. This analysis provides the foundation for understanding the implications of potential future hydrogen-based aviation fleets on climate and air quality.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164348</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Changing Climate Beneath Our Feet: How plant and microbial life in tropical soils are shifting and what that could mean for the future of our warming planet</title>
<link>https://hdl.handle.net/1721.1/164347</link>
<description>A Changing Climate Beneath Our Feet: How plant and microbial life in tropical soils are shifting and what that could mean for the future of our warming planet
Ocharoenchai, Nanticha
Discussions about climate change and carbon sequestration have largely revolved around plant structures we can easily see, like leaves that absorb CO₂ for photosynthesis and woody trunks that store carbon as biomass. Carbon credits that companies and consumers buy to compensate for emissions they’ve produced are primarily calculated based on these parts, as are models that predict climate change impacts. But researchers are now beginning to understand that what we see aboveground is only part of the equation. The other part lies beneath our feet in an intricate, expansive, covert realm where plant roots, microbial communities and soil dynamics interact. These belowground systems are crucial for cycling carbon through the Earth and regulating the climate, but relatively little is known about them compared to aboveground systems. This is especially true in tropical regions, where one-third of the world’s terrestrial carbon storage lies. However, these systems are evolving quickly with climate change, contradicting what models have previously projected. With so many global decisions based on such models, these uncertainties hold planetary significance for our future. A group of scientists is fighting an uphill battle, racing against time to understand this understudied field.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164347</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Winter</title>
<link>https://hdl.handle.net/1721.1/164346</link>
<description>Engineering Winter
White, Mackenzie
As winters warm and snowfall becomes less reliable, ski resorts worldwide increasingly depend on artificial snow to stay open. Snowmaking, once a stopgap, has become the backbone of entire seasons in a sprawling choreography of pumps and pressurized mist designed to hold trails together. At resorts like Vermont’s Bromley Mountain, snowmakers work through the night, drawing millions of gallons from limited reservoirs and operating within narrowing windows of cold air. What emerges is a portrait of winter in transition: less predictable, more expensive, increasingly manufactured. The efforts to preserve winter recreation carry growing costs in energy, water, and equitable access. Many smaller, independent ski areas struggle to meet the demands of climate adaptation, while larger resorts expand their operations, widening the divide in who can afford to sustain operations. In the American West, where rivers depend heavily on snowpack melt, the spread of snowmaking ties winter recreation to a water system already under immense strain. As artificial snow becomes the norm, winter is increasingly a season bought, built, and rationed, raising the question of whether attempts to keep the season alive are accelerating the changes that threaten to erase it.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164346</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Influence of Electronic Structure and Lattice Dynamics on Oxygen Ion Transport in Solid-State Ionic Conductors</title>
<link>https://hdl.handle.net/1721.1/164345</link>
<description>Influence of Electronic Structure and Lattice Dynamics on Oxygen Ion Transport in Solid-State Ionic Conductors
Vivona, Daniele
Solid-state oxygen ion conductors are crucial for electrochemical devices such as separation membranes, solid-oxide electrolyzers, fuel cells, and sensors, serving as a technological link between renewable energy generation and consumption. Currently, these conductors are limited by slow transport rates and high operational temperatures, which pose challenges and increase costs. Developing faster conductors that operate at lower temperatures requires reducing activation energy and enhancing the pre-exponential factor in the Arrhenius equation of conductivity. However, our understanding of the fundamental processes in oxygen ion transport and methods to improve oxygen ion conductivity remain limited. This thesis focuses on understanding the fundamental mechanisms that regulate oxygen ion transport. First, the migration energy barrier in perovskite oxides is linked to an electronic energy penalty from local charge screening near the hopping ion. The energy of local electronic states is identified as a fundamental descriptor of the migration barrier. Next, migration entropy and phonon density of states (DOS) are highlighted as the main factors regulating the pre-exponential factor of oxygen ion conductivity across different materials. The phonons of oxygen ions near the hopping ion significantly contribute to migration entropy, suggesting that migration entropy can be tuned by designing the phonon dynamics of these atoms. These results imply that a widely observed correlation between increasing pre-exponential factors and activation energy arises from coupling local electronic energy states and phonons. The results are extended to the formation of oxygen vacancies and interstitials in perovskite and Ruddlesden-Popper oxides. We find that defect formation energy rises with defect formation entropy, which is linked to electronic energy states interacting with phonons.
In perovskite oxides, lower vacancy formation entropy is correlated with increasing oxygen phonon band center and shortening bond lengths with oxygen vacancy formation. In Ruddlesden-Popper oxides, lower interstitial formation entropy is associated with reduced octahedral tilting and local phonon changes. This thesis establishes a theoretical foundation for treating migration entropy and defect formation entropy as design variables in next-generation ionic conductors. By highlighting the impact of electronic structure and lattice dynamics on energy barriers and entropic drivers, the findings suggest new pathways for material design through the strategic separation of these factors and the intelligent design of lattice moieties in oxygen ion transport environments.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164345</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>IP Networks Over Heterogeneous Embedded Serial Links</title>
<link>https://hdl.handle.net/1721.1/164271</link>
<description>IP Networks Over Heterogeneous Embedded Serial Links
Perry, Nathan
The Internet Protocol (IP) provides a number of key benefits to networked devices: it serves as a "narrow waist" enabling functional modularity by decoupling lower-layer devices from application behavior, it provides a notion of transitive connectivity and a number of standardized methods to achieve it, and most importantly, it is ubiquitous, enabling almost all networked applications to mutually communicate.&#13;
&#13;
Many embedded microcontrollers cannot take advantage of the benefits of IP because they lack the dedicated networking hardware that is, as a practical matter, required to interact with nontrivial networks. I observe that multihop point-to-point IP networks can in principle be constructed over the communication media that microcontrollers commonly do have, such as UARTs, I2C, SPI, and CAN bus, but software support is lacking to make this networking approach accessible.&#13;
&#13;
Therefore, this thesis develops and evaluates interstice, a platform-independent, open-source software library designed to enable the flexible implementation of modular packet forwarders in userspace. It can be used to interconnect devices and their IP stacks across a variety of conventional&#13;
and unconventional links. Interstice exposes a reprogrammable, dynamically-updatable packet-forwarding strategy, enabling forwarder nodes in principle to act as hubs, bridges, full routers, or implement firewalls or NAT, as application requirements and platform constraints permit.&#13;
&#13;
This approach enables benefits for modular, networked systems of microcontrollers which need to talk to the outside world: using IP enables internal microcontrollers to communicate with external devices such as PCs and smartphones without the need for application gateways. Further, to the extent that such networks are runtime-reconfigurable, features of IP such as address assignment, dynamic routing, and link-agnosticity can be incredibly beneficial.&#13;
&#13;
Interstice is evaluated here primarily against networks of various types of serial links (UART, I2C, CAN) speaking PPP, selected to demonstrate the utility of the approach for connecting embedded devices lacking dedicated networking peripherals, and further that link drivers can be specialized to take advantage of the specific characteristics of each link. The approach is showcased in application scenarios including a networked milling machine, and is analyzed for a number of performance metrics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164271</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>BioLIG: Designing Biologically Derived Electronics and Their Speculative Lives</title>
<link>https://hdl.handle.net/1721.1/164270</link>
<description>BioLIG: Designing Biologically Derived Electronics and Their Speculative Lives
Li, Yuqing Lucy
Imagination is the origin of reality. Cultivating new infrastructural and ecological imaginaries is crucial to addressing the climate crisis. Where is the space to prototype new social and technological relations? Transient electronics is an emerging field in advanced materials focused on making electronics that don’t last. Devices are designed to be transient for biomedical, environmental monitoring, or energy storage applications. It is a fascinating and unconventional direction that advances the area of biocompatibility, redefining waste and time-programmable decay {Making electronics that, 2022}. However, in a manufacturing system that fundamentally favors the inert and invariant, transient properties can be precisely the qualities that make adaptation most challenging, often failing at the very stage of imagination. Taking inspiration from transient electronics, this thesis consists of a set of novel biomaterials, a workflow, and three fictional stories to enrich our imagination and instill agency amidst entangled humanitarian, ecological, and technological crises. BioLIG is a material for prototyping accessible and compostable electronics. It uses laser-induced graphene as an organic, bio-derived conductor and affordable biomaterials as the substrate. Three sheets and two inks make up a toolkit to create biocomposites with different properties, colors, and textures specifically designed for prototyping sensors and circuits with transient behaviours. Through a series of characterisations, BioLIG is evaluated and demonstrates that with one material, its electrical performance is on par with synthetic substrates. However, the goal is not to create a replacement material but to prototype new social and technological relations to transient materials. Through a questionnaire, I collected stories, ideas, and questions from makers, designers, and artists for BioLIG and used those as the basis for imagination. 
In a speculative house, on three floors, three stories unfold of a hoarder, a city forester, and a family living in a time with a leap in our relationship to fabrication, to electronics, and to decay.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164270</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Being. Creative. Together. Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI</title>
<link>https://hdl.handle.net/1721.1/164269</link>
<description>Being. Creative. Together. Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI
Dhariwal, Manuj
As Artificial Intelligence (AI) becomes increasingly interwoven into our creative, social, and learning experiences, we must ask: Will these technologies deepen our connection to the timeless human experiences of Being, Being Together, and Being Creative Together—or will they pull us apart, leaving us more anxious and isolated? In an era where AI systems are increasingly framed as our “co-creators” and “companions,” enabling hyper-personalized yet hyper-isolated interactions, this dissertation reclaims the prefix ‘co-’ as fundamentally interhuman—introducing a set of new paradigms that center human connection, co-creativity, and calm in the design of technologies.&#13;
&#13;
Central to this work, we’ve developed CoCo (coco.build), a general-purpose, real-time co-creative learning platform that empowers young people to engage in a wide variety of safe, shared creative experiences with their peers—spanning creative computing, AI education, digital art, writing, and more. Through the platform, we showcase how digital environments can move beyond isolated modes of learning and creating to support multiple ways of being creative together with others—introducing a new paradigm for real-time digital collaboration. We further illuminate how CoCo has been envisioned as a “self-less” social platform that de-emphasizes comparison-based, self-centric metrics (profiles, likes, followers) prevalent in most online systems for young people. &#13;
&#13;
We weave these interconnected ideas into the unifying theme of “Being. Creative. Together.”— values we believe are both timeless and especially timely in the AI era. We supplement the broader design, technical, practical, and pedagogical contributions of this work by sharing insights and feedback from pilots with over 2,000 young people and educators across diverse settings. Ultimately, we see this dissertation as both a contribution and a call—to preserve the human essence of co-, to distinguish it from the useful, powerful, but instrumental AI interactions, and to shape digital environments that nurture our capacity to co-imagine, co-create, co-learn, co-exist, and co-evolve—with and through one another.&#13;
&#13;
Note: This work has been co-developed with Shruti Dhariwal. See https://coco.build/thesis for suggested citation and updates on this work.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164269</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward the computational transformation of legal theory and practice</title>
<link>https://hdl.handle.net/1721.1/164268</link>
<description>Toward the computational transformation of legal theory and practice
Mahari, Robert
This doctoral thesis seeks to advance the formalization of computational law as a distinct research discipline. It explores three interwoven key themes: the empirical understanding of legal systems through advanced computational methods; the development of computational tools to augment the capabilities of legal practitioners, thereby expanding access to justice; and the identification of novel, computationally-enabled regulatory interventions. This research directly confronts the global access to justice crisis and the shortcomings of conventional legal services that frequently leave businesses and individuals without adequate support. Furthermore, the thesis investigates innovative regulatory strategies for emerging technologies, aiming to synchronize legal frameworks with contemporary technological progress by exploring adaptive and forward-looking governance approaches.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164268</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modular Development Platforms and Creative Ecosystems: Design &amp; Deployment for Wide Impact Across Fields</title>
<link>https://hdl.handle.net/1721.1/164267</link>
<description>Modular Development Platforms and Creative Ecosystems: Design &amp; Deployment for Wide Impact Across Fields
Shtarbanov, Ali
Physical, digital, and conceptual tools and building blocks are fundamental enablers and accelerators of humanity’s progress in technology, science, medicine, art, and even in abstract fields like mathematics, philosophy, and social sciences. Hardware development platforms present a special class of tools and building blocks, facilitating and accelerating innovation, prototyping, and research. They drastically reduce prototyping time and complexity, improve efficiency for experts, democratize access to innovation, and even inspire entirely new ideas. This research investigates how to design, develop, and deploy development platforms in ways that maximize their real-world impact potential. It focuses not only on the technical and engineering aspects, but also on the complete ecosystem a platform needs in order to have impact, including community building, engagement with users and volunteers, content strategy, online presence, publicity, deployment, feedback loops, modularity, financial viability, and symbiotic relationships. A comprehensive Design &amp; Deployment Framework is introduced as a conceptual tool for creating high-impact platforms and creative ecosystems, recognizing and fostering the positive feedback loops that sustain them and that shape their evolution and growth. This framework is applied in the development and deployment of multiple novel platform and ecosystem projects, including FlowIO, SleeveIO, and ModiStrap, as well as the ecosystem SoftRobotics.IO. These works have benefited thousands of people around the world, providing researchers, designers, and engineers with powerful, reconfigurable, modular enabling artifacts that streamline prototyping, accelerate research, and lower barriers in fields like soft robotics, haptics, assistive technology, shape-changing interfaces, interactive arts, and more.
A multitude of research, art, and engineering projects made possible by FlowIO and SoftRobotics.IO are presented, as well as over a dozen case studies showcasing how other users across disciplines have adopted, utilized, and extended these systems to advance their own creative, educational, and technical endeavors. Additionally, this thesis also investigates various deployment models for hardware and introduces a new hardware deployment model for equitable access to expensive hardware that may otherwise be financially out of reach for many users, as well as an “earned open-source” model, which preserves the essence of the traditional open-source model, while eliminating many of its pitfalls.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164267</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing Biosecurity in the Age of AI: Integrating Novel Detection, Suppression, and Evaluation Approaches</title>
<link>https://hdl.handle.net/1721.1/164266</link>
<description>Advancing Biosecurity in the Age of AI: Integrating Novel Detection, Suppression, and Evaluation Approaches
Justen, Lennart J.
Civilization confronts a growing challenge: advancing transformative biological science while safeguarding against catastrophic misuse, a tension amplified by the rapid convergence between biology and artificial intelligence. The COVID-19 pandemic starkly revealed our vulnerabilities to self-replicating, exponential biological phenomena, yet current defenses remain dangerously inadequate—often blind to novel pathogens until too late and lacking barriers against rapid airborne transmission. This thesis argues that robust biosecurity enables, rather than hinders, progress, and advances three key defensive capabilities. First, it evaluates blood metagenomics for pathogen-agnostic surveillance, reanalyzing public datasets to quantify viral signatures and guide the implementation of much-needed early-warning systems sensitive to novel pathogens. Second, it advances far-UVC, ultraviolet light with wavelengths between 200 and 235 nm, for continuous indoor air disinfection, critically assessing its safety profile through an international expert review and establishing research priorities essential for deploying this vital physical defense against airborne threats. Third, it develops rigorous methodologies for evaluating AI's rapidly evolving biological capabilities, benchmarking frontier models across diverse tasks to track progress, reveal limitations in current assessments, and guide responsible innovation in this powerful dual-use technology. Collectively, these contributions help accelerate technologies to mitigate biological risks, thereby helping secure the conditions for continued, beneficial advancement of biology in the age of AI.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164266</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Dialogue to Decision: An LLM-Powered Framework for Analyzing Collective Idea Evolution and Voting Dynamics in Deliberative Assemblies</title>
<link>https://hdl.handle.net/1721.1/164265</link>
<description>From Dialogue to Decision: An LLM-Powered Framework for Analyzing Collective Idea Evolution and Voting Dynamics in Deliberative Assemblies
Poole-Dayan, Elinor
Deliberative assemblies—representative samples of citizens engaged in collective decision-making through facilitated learning and deliberation—are increasingly recognized as powerful tools for revitalizing democratic governance. Yet, core aspects of how deliberation shapes which ideas advance, how perspectives evolve, and why certain recommendations succeed remain opaque and underexamined. This thesis addresses these gaps by investigating: (1) How might we trace the evolution and distillation of ideas into concrete recommendations within deliberative assemblies? and (2) How does the deliberative process shape delegate perspectives and influence voting dynamics over the course of the assembly?&#13;
&#13;
&#13;
To answer these questions, I develop LLM-based methodologies for empirically analyzing transcripts from a tech-enhanced student deliberative assembly. The first framework identifies and visualizes the space of expressed suggestions, revealing that seemingly large gaps between ideas and final recommendations often reflect productive deliberative filtering—while also surfacing overlooked viable ideas.&#13;
A second analysis integrates post-assembly survey data with transcript-grounded voting patterns to uncover the primary drivers of vote change: edits to recommendations, evolving opinions, and strategic shifts in response to updated priorities. Building on this, I introduce a framework for reconstructing each delegate’s evolving stance across the assembly, linking shifts in perspective to specific deliberative moments and justifications.&#13;
&#13;
Together, these methods contribute novel empirical insight into deliberative processes and demonstrate how LLMs can surface high-resolution dynamics otherwise invisible in traditional assembly outputs. The findings lay groundwork for new tools that support facilitators and delegates during live assemblies, improve transparency for decision-makers, and elevate ideas that may otherwise be missed.&#13;
&#13;
Looking ahead, this work opens pathways for comparative research across assemblies and highlights the potential for human-centered AI to meaningfully enhance deliberative democratic practice. As societies seek new modes of participatory governance amid growing polarization and institutional mistrust, tools that strengthen deliberation without compromising its core human character are urgently needed.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164265</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Private, Verifiable, and Auditable AI Systems</title>
<link>https://hdl.handle.net/1721.1/164264</link>
<description>Private, Verifiable, and Auditable AI Systems
South, Tobin
The growing societal reliance on artificial intelligence necessitates robust frameworks for ensuring its security, accountability, and trustworthiness. This thesis addresses the complex interplay between privacy, verifiability, and auditability in modern AI, particularly in foundation models. It argues that technical solutions that integrate these elements are critical for responsible AI innovation. Drawing from international policy contributions and technical research to identify key risks in the AI pipeline, this work introduces novel technical solutions for critical privacy and verifiability challenges.  Specifically, the research introduces techniques for enabling verifiable and auditable claims about AI systems using zero-knowledge cryptography; utilizing secure multi-party computation and trusted execution environments for auditable, confidential deployment of large language models and information retrieval; and implementing enhanced delegation mechanisms, credentialing systems, and access controls to secure interactions with autonomous and multi-agent AI systems. Synthesizing these technical advancements, this dissertation presents a cohesive perspective on balancing privacy, verifiability, and auditability in foundation model-based AI systems, offering practical blueprints for system designers and informing policy discussions on AI safety and governance.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164264</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language Models as Mirrors and Bridges for Intergroup Communication</title>
<link>https://hdl.handle.net/1721.1/164263</link>
<description>Language Models as Mirrors and Bridges for Intergroup Communication
Jiang, Hang
This dissertation explores how large language models (LLMs) can serve dual roles in intergroup communication: as mirrors that reflect intergroup differences, and as bridges that facilitate communication across group boundaries. Intergroup communication refers to interactions between individuals from different social groups, such as political, cultural, or professional communities, where divergent perspectives often lead to misunderstandings, unequal access to information, and social fragmentation.&#13;
&#13;
The first part of the dissertation presents LLMs as mirrors that reveal intergroup differences. We first introduce CommunityLM, a novel framework for probing public opinion by fine-tuning LLMs on social media posts from specific communities. Our case study comparing Republican and Democratic groups reveals that model predictions align well with human survey responses, substantially outperforming established baselines. Building on this foundation, we develop PersonaLLM to investigate whether prompt-based LLM agents can generate content aligned with assigned personas, which has emerged as a popular approach for modeling the behaviors of social groups. Through automated and human evaluations, we demonstrate that these agents can complete personality tests and write stories that reflect the distinctive behavioral patterns of specific personality profiles. Together, these complementary projects illustrate how LLMs can effectively capture and simulate the unique perspectives and behaviors that characterize diverse social groups.&#13;
&#13;
The second part of the dissertation presents LLMs as bridges that facilitate communication across group boundaries. First, we introduce Bridging Dictionary, an interactive tool that uses retrieval-augmented generation (RAG) techniques with LLMs to identify polarized language and suggest more inclusive alternatives. In collaboration with PBS Frontline, we demonstrate the potential of LLMs to reduce misunderstanding in journalism and political communication. Second, we present Legal Storytelling, a human-LLM collaboration framework that generates accessible narratives to explain complex legal concepts to non-experts. Through randomized controlled trials (RCTs), we find that LLM-generated narratives can improve legal literacy and help bridge communication gaps between experts and laypeople, particularly among non-native English speakers. Third, we develop FaciliTrain, a voice-based, LLM-powered system that enables facilitators to learn and practice intergroup dialogue skills with multiple LLM agents representing diverse social backgrounds and personas in a small-group setting. User studies with campus participants show encouraging early results, suggesting that LLMs can effectively support the development of communication skills essential for constructive intergroup dialogue. Together, these projects illustrate how LLMs can actively foster mutual understanding across social divides by promoting more inclusive, accessible, and constructive communication.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164263</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Delibrary: From Discussion to Outcomes and Back(casted) Again, a Visualization Tool for Deliberative Assemblies</title>
<link>https://hdl.handle.net/1721.1/164262</link>
<description>Delibrary: From Discussion to Outcomes and Back(casted) Again, a Visualization Tool for Deliberative Assemblies
Wong, Wing Cheung Michael
With trust in traditional democratic institutions waning, it is increasingly important to examine how potential new institutions could be created and bolstered, with particular emphasis on restoring trust and empowering the public. One potential solution, the citizens' or deliberative assembly, can serve to bridge the governance and legitimacy gap between real-world policy decision-making processes and citizen-driven impact by leveraging random sortition and a well-designed deliberation process. In this thesis, I explore how AI-driven sensemaking via GPT-4o-mini, a large language model (LLM), combined with custom-built visualization tools, can potentially reveal the dynamics within citizen deliberative assemblies, where representative, randomly selected citizens navigate public interest issues through facilitated deliberation, and how such tools can serve to amplify transparency within both the assembly process itself and the issues they explore. Through building three different prototype visualization frameworks and the development of an AI-powered topic identification process called backcasting, I analyze novel datasets from two tech-enhanced assemblies: fully recorded discussions from both an on-the-ground citizens' assembly in Deschutes County, Oregon, as well as an MIT student assembly on sustainability. In backcasting, assembly outcomes are linked to transcriptions of assembly discussions via LLM tagging, uncovering what, when, who, and where participants deliberate about topics that eventually become proposals/recommendations/outcomes. Furthermore, I analyze the sentiment with which an assembly delegate presented their view on a certain recommendation (agreement, disagreement, etc.) in addition to the supporting reasoning patterns this delegate used to express their view, if any (e.g. whether they draw from personal experience, reference outside expertise, etc.).
To evaluate the final prototype tool, I interview subject matter and assembly experts, assembly organizers/facilitators, as well as assembly delegate members to assess the potential and drawbacks of this visualization tool and AI sensemaking backbone. Positive feedback from these user studies includes the clear potential for research, narrative building, and facilitation improvement, in addition to greater perceived transparency into the workings of an assembly process. Further work is still needed, however, to address significant lingering issues, such as adjusting presentation to better serve specific use cases and to reduce complexity and confusion, the most referenced drawback of Delibrary. Overall, my thesis aims to build transparent insights into the human-led structures of assemblies, enabling relevant stakeholders, from delegates and policy makers to the general public, to achieve a better understanding of the assembly process and engender a perception of legitimacy by illustrating that delegates drawn from all walks of life do have a meaningful voice in an impactful process. By helping to promote this understanding and perception of legitimacy of an effective and respectful deliberation process, I strive to ultimately help scaffold healthier democratic decision-making.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164262</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Facilitating Creative Learning: Engaging in a Practice of Care</title>
<link>https://hdl.handle.net/1721.1/164261</link>
<description>Facilitating Creative Learning: Engaging in a Practice of Care
Presicce, Carmelo
Creative learning is shaped not only by tools and activities, but by relationships. This dissertation explores facilitation in creative learning environments as a relational practice centered on care—not as a set of techniques, but as a deeply human way of being with others, a commitment to creating spaces where people feel supported enough to explore, connected enough to share, and valued enough to express themselves. Grounded in constructionist, socioconstructivist, and humanistic pedagogies, the research draws from my multi-year engagement with Learning Creative Learning (LCL)—an online course and global community for educators—and WeScratch, a series of hands-on, collaborative online workshops introducing educators to creative coding. Through qualitative analysis of small-group facilitation during WeScratch workshops, I explore how volunteer facilitators experience and reflect on their practice. Drawing from three case studies, I examine how care takes shape in the situated, relational work of creative learning facilitation. In particular, I identify three interrelated forms of care: epistemic care, which focuses on what and how people learn; affirming care, which supports what learners value and who they are; and convivial care, which attends to how learners feel and relate to one another in a group. After introducing these three forms of care through the work of individual facilitators, I show how epistemic, affirming, and convivial care are deeply interwoven in practice—at times reinforcing one another, at times pulling in different directions. Facilitators must navigate these tensions in the moment, making situated judgments about when to step in, when to hold back, and how to respond to the evolving needs of individuals and groups. By centering care, this research highlights facilitation as deeply human, relational work that sustains the conditions for creative learning, contributing to the broader and evolving discourse on constructionism. 
It also makes the case for seeing facilitation as an ethical and political practice. In a time when educational discourse is increasingly shaped by ideals of efficiency and optimization—and the world faces rising authoritarianism and dehumanization—choosing to care is not only pedagogically meaningful, but also politically urgent.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164261</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Earth Abundant Catalytic Materials for Abatement of Atmospheric Methane Sources, and Evaluation of Agricultural Deployment Environments</title>
<link>https://hdl.handle.net/1721.1/164260</link>
<description>Novel Earth Abundant Catalytic Materials for Abatement of Atmospheric Methane Sources, and Evaluation of Agricultural Deployment Environments
Brenneis, Rebecca J.
Annual global average temperatures in the past year have already exceeded the international target limit of 1.5°C, and the window to prevent that rise from extending is rapidly closing. The high global warming potential (GWP) and short atmospheric residence time (half-life of around 12 years) of methane make it a critical target for action to slow the pace of climate change in this decade. Yet technological solutions for methane abatement are challenged by methane’s inertness, dilute atmospheric concentrations, and diffuse, variable emissions sources. In this thesis, I propose the use of bio-inspired, earth-abundant, heterogeneous catalysts as a novel tool for atmospheric and emissions-based methane abatement. Copper zeolites were characterized for their ability to convert low levels of methane, continuously, at low temperatures, for moderate durations, and in the presence of a variety of gaseous mixture influents, designed to mimic atmospheric air at standard temperatures and pressures. Catalytic performance was tested under conditions designed to mimic those found at two of the primary sources of low-level, anthropogenic emissions: ventilation air methane (VAM) and industrial dairy. Laboratory-synthesized catalysts were shown to completely oxidize methane at concentrations ranging from atmospheric to 1%, covering the range of subflarable levels. Conversion was demonstrated at temperatures as low as 270°C, with complete conversion achievable as low as 350°C, in the presence of 20% oxygen. While the presence of water vapor, nitric oxide, and hydrogen sulfide was shown to partially reduce catalytic efficiency, conversion efficiency was restored with increased temperature. The presence of carbon dioxide, alkanes, ammonia, and hydrogen, at industrially relevant concentrations, had no effect on catalytic performance. Finally, atmospheric samples were collected at six industrial-scale dairy barns across the Midwest and compared with the simulated laboratory conditions.
Dairy samples fell within the ranges tested at the bench scale, showing no evidence of any impediment to copper zeolite as a potential abatement tool. Methane concentrations at dairies were shown to be on the order of atmospheric to low tens of ppmv, making copper zeolites the only currently identified abatement strategy to address methane emissions at these locations. While it remains to be shown that these zeolites can provide a net greenhouse gas benefit under the conditions required, copper zeolites are a strong option on a short list of technologies to address methane at any subflarable concentration, sources of which comprise 80% of global emissions, showing great promise as a breakthrough climate technology.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164260</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>To Co- Is Human: Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI</title>
<link>https://hdl.handle.net/1721.1/164259</link>
<description>To Co- Is Human: Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI
Dhariwal, Shruti
In an era where Artificial Intelligence (AI) systems are increasingly framed as our “companions” and “co-creators,” this dissertation reclaims “co-” as a fundamental marker of shared human experience—using it as a foundation to reimagine and build technologies that consciously center interhuman connection and co-creativity. Central to this work, we’ve developed CoCo (coco.build)—a general-purpose, real-time co-creative learning platform that empowers young people to engage in a wide variety of safe, shared creative experiences with their peers, spanning creative computing, AI education, digital art, writing, and more. Through the platform, we showcase how digital environments can move beyond isolated modes of learning and creating to support multiple ways of being creative together with others—introducing a new paradigm for real-time digital collaboration. We further illuminate how CoCo has been envisioned as a “self-less” social platform that de-emphasizes comparison-based, self-centric metrics (profiles, likes, followers) prevalent in most online systems for youth. We anchor these interconnected ideas in a unifying theme of “Being. Creative. Together.”—reflecting timeless values that have become especially timely in an era when AI tools can further accentuate individualized digital experiences for young people. We supplement the broader design, technical, practical, and pedagogical contributions of this work by sharing insights and feedback from pilots with over 2,000 young people and educators across diverse settings. Ultimately, we see this dissertation as both a contribution and a call—to preserve the human essence of co-, to distinguish it from the useful, powerful, but instrumental AI interactions, and to shape digital environments that nurture young people’s capacity to co-imagine, co-create, co-learn, co-exist, and co-evolve—with and through one another. &#13;
&#13;
Note: This work has been co-developed with Manuj Dhariwal. See https://coco.build/thesis for suggested citation and updates on this work.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164259</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Storybooks for Early AI Literacy</title>
<link>https://hdl.handle.net/1721.1/164170</link>
<description>Interactive Storybooks for Early AI Literacy
Pu, Isabella
As artificial intelligence (AI) becomes increasingly present in children's everyday environments, there is an urgent need for developmentally appropriate tools that help young learners understand and shape these technologies. To be effective, these tools must not only successfully convey complex concepts but also engage children in ways that are meaningful, accessible, and fun.&#13;
&#13;
This thesis introduces the Interactive Storybooks for Early AI Literacy, a series of ten interactive storybooks for children ages 6–9 that combine narrative, mini-games, and scaffolded creative AI interactions to teach core AI and robotics concepts. The storybooks follow an overarching narrative featuring a friendly robot, Doodlebot, who must learn creative tasks with the child's help, framing the child as an AI designer and introducing them to the concept of training AI models through the narrative. The storybooks additionally contain interactive games and activities which help keep kids excited and engaged, while providing structured opportunities to experiment with and explore AI creation tools.&#13;
&#13;
First, a pilot study was conducted at a community summer camp with four Interactive Storybooks. Children expressed joy and pride in their AI creations, used the characters as emotional anchors for learning, and began to successfully articulate key AI concepts. Four engagement archetypes emerged: the Reader, the Gamer, the Showcaser, and the Social Connector, each representing a distinct way children interacted with the storybooks. However, despite behavioral signs of engagement, many children described the narrative portions as boring and claimed to prefer games.&#13;
&#13;
To explore this tension, a home deployment study compared two versions of the system: a "Books" condition with the full narrative and a "Games" condition with only instructional text. Both conditions included the same mini-games and AI interactions. While children in both groups reported similar levels of enjoyment, those in the Books condition showed significantly higher learning gains, greater increases in perceived knowledge and confidence, and stronger connections to the characters. Children in the Books condition also more frequently referenced the narrative when describing AI concepts and demonstrated more creative and iterative behavior during and after gameplay.&#13;
&#13;
Overall, these findings suggest that combining storytelling, gameplay, and creative AI interactions is an effective and engaging approach to teaching AI and robotics to young children. Narrative context appears to support concept recall, deepen emotional investment, and promote thoughtful experimentation, even with complex concepts for this age group, like AI and robotics. Based on insights from both studies, this thesis concludes with six design recommendations for creating developmentally appropriate, emotionally resonant AI education tools for early learners using narrative and play.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164170</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decentralized Machine Learning over Fragmented Data</title>
<link>https://hdl.handle.net/1721.1/164169</link>
<description>Decentralized Machine Learning over Fragmented Data
Singh, Abhishek
The remarkable scaling of data and computation has unlocked unprecedented capabilities in text and image generation, raising the question: Why hasn’t healthcare seen similar breakthroughs? This disparity stems primarily from healthcare data being fragmented across thousands of institutions, each safeguarding patient records in regulatory-compliant silos. The problem is not limited to healthcare but extends to other industries with fragmented data across institutions and individuals. Instead of centralizing various datasets to solve the fragmentation problem, which raises regulatory and ethical concerns, this thesis proposes systems and algorithms to decentralize the machine learning pipeline. Current approaches in this area have centered around Federated Learning (FL), which enables model training over distributed data. However, FL’s dependence on central coordination and inflexibility with heterogeneous systems limit its applicability in healthcare settings. Motivated by these challenges, I explore the following three core themes:&#13;
&#13;
1) Coordination – Today’s coordination algorithms typically rely on static rules or randomized communication, approaches that turn out to be sub-optimal when data heterogeneity is high. I present a new system and a benchmark framework that enables systematic assessment of different coordination algorithms. Next, I propose an adaptive coordination algorithm that leverages historical performance and learning dynamics to improve coordination.&#13;
&#13;
2) Heterogeneity – Data owners can vary significantly in their data distributions, computational resources, and privacy requirements. To address this heterogeneity, I turn the focus from the traditionally protected training phase to securing the critical inference process. Next, I develop techniques for distributed training that adapt to heterogeneous computational capabilities across different agents.&#13;
&#13;
3) Scalability – Enabling scaling in decentralized ML requires addressing three key challenges: parallelization, synchronization, and self-scaling. While parallelization has advanced significantly, the other two remain challenging. I present a framework for offline collaboration through sanitized, synthetic datasets that eliminates constant synchronization needs while preserving privacy.&#13;
&#13;
This thesis identifies and addresses some of the bottlenecks along these three core themes through a complementary set of solutions: adaptive coordination, heterogeneity-aware training, and scalable collaboration. Together, these building blocks can enable a practical framework for unlocking data silos across institutions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164169</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interplay between spatial structure and competition in ecological communities</title>
<link>https://hdl.handle.net/1721.1/164168</link>
<description>Interplay between spatial structure and competition in ecological communities
Swartz, Daniel W.
Ecology, much like physics, has a long history of theoretical contributions. In this thesis, we take a physics approach to describing ecological communities, searching for simple, emergent features that can generalize beyond specific models of community dynamics. Unifying all of the models we study is an underlying spatial structure, leading to a richer set of possible behaviors than a typical well-mixed model. We first study the case of a metapopulation, a collection of smaller communities linked by dispersal. We find that when the environment is allowed to fluctuate stochastically, new growth laws emerge at the single-species level, and high diversity is achieved in the case with many species. We then study the case of pathogen evolution, again in the metapopulation framework. We find that intermediate dispersal can act as a strong driver of pathogen evolution. We also study what happens as a population of microbes expands into unexplored territory, known as a range expansion. We find that a simple model can capture all morphological phases observed in experiments and predict invasion fitness as a function of local and global competitive ability. We also break a standard assumption in microbial ecology, the isotropy of space, and find that a new sector morphology emerges.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164168</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Algorithmic Cookbook of Quantum Science: Quantum and Classical Recipes for Computation</title>
<link>https://hdl.handle.net/1721.1/164167</link>
<description>The Algorithmic Cookbook of Quantum Science: Quantum and Classical Recipes for Computation
Martyn, John Michael
Since the dawn of science, computation and physics have evolved alongside each other, both driven by a shared quest to solve problems and calculate properties of the natural world. Today, this symbiotic relationship is epitomized in quantum information science, which proposes to use quantum mechanics to solve hard computational problems and develop new paradigms of communication and cryptography. Yet often absent from these developments is a clear, human-interpretable understanding, with many quantum protocols built from inherently quantum concepts (e.g., entanglement, superposition) that defy our classical line of thought and muddle the search for efficient quantum algorithms. Here we show that this search need not be so opaque: simple mathematical tools, namely polynomials and their fundamental theorems, in unison with concepts from classical computing, provide a powerful framework for the design of quantum algorithms. We develop this framework and use it to construct an assortment of quantum algorithms, including methods for quantum simulation, parallel computing, randomized algorithms, and continuous-variable quantum hardware. In illuminating this framework, we find a striking bidirectional flow: just as classical concepts inspire new quantum algorithms, so too can quantum mechanical insights bring about novel methods of classical computing. In this reverse direction, we adopt inherently quantum concepts, such as random compilation and bosonic symmetry, to develop new classical methods, with applications in simulating quantum systems and designing robust neural networks. In aggregate, this thesis provides a compendium of algorithmic techniques for probing quantum systems and solving hard problems, using both quantum and classical tools—an “algorithmic cookbook”—predicated on deep connections between these two domains. The recipes presented here aim to demystify black boxes of quantum information science, and provide a valuable resource for future developments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164167</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Death of Quasiparticles: Strongly Interacting Gapless Phases&#13;
with Fermi Surfaces and Fractional Statistics</title>
<link>https://hdl.handle.net/1721.1/164166</link>
<description>The Death of Quasiparticles: Strongly Interacting Gapless Phases&#13;
with Fermi Surfaces and Fractional Statistics
Shi, Zhengyan
The emergence of quasiparticles at low temperature provides a powerful organizing principle for many quantum phases of matter, ranging from conventional magnets and superconductors to exotic insulators with topological order. In this thesis, I describe my research in gapless quantum phases in which the framework of quasiparticles breaks down. The main characters are two categories of gapless phases that feature the interplay between strong interactions and two additional ingredients – Fermi surfaces and fractional statistics. Chapter 2 through Chapter 5 focus on strongly interacting metals with Fermi surfaces. The most salient examples are a class of Hertz-Millis models describing the onset of spontaneous symmetry breaking in a metallic environment. At the quantum critical point, gapless order parameter fluctuations destroy quasiparticles living on the Fermi surface, giving rise to a strongly coupled non-Fermi liquid metal. A key result of these chapters is the identification of an infinite-dimensional symmetry that survives in these non-Fermi liquid metals despite the death of quasiparticles. This infinite-dimensional symmetry and its quantum anomaly lead to a series of non-perturbative results on thermodynamics and transport, which are confirmed by perturbative diagrammatic calculations in special examples. Chapter 6 through Chapter 8 explore quantum phases in which anyonic quasiparticles with fractional statistics play an essential role. When parameters in the system are tuned to close the anyon energy gaps, the original anyons lose their coherence and a variety of novel phases emerge. A highlight in this direction is a new mechanism for topological superconductivity in itinerant abelian and non-abelian anyon fluids, which could make contact with experiments on doped fractional quantum anomalous Hall states in the near future.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164166</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Particles Inside Particles: The Flow of Energy in Quarks, Gluons, and Jets</title>
<link>https://hdl.handle.net/1721.1/164165</link>
<description>Particles Inside Particles: The Flow of Energy in Quarks, Gluons, and Jets
Alipour-fard, Samuel
This thesis presents the author’s work in developing probes of the inner structure of jets in high-energy particle collisions. We begin by introducing QCD and the scattering of partons (quarks and gluons), discussing jets as theoretical and experimental proxies for partonic physics, and presenting the partonic cascade model of jet formation and jet substructure. Noting the ubiquitous presence of low-energy pollution in particle collision events, in the forms of hadronization, detector effects, the underlying event (UE), and pileup (PU), we then move towards the modern research area of developing pollution-insensitive probes of jet substructure. Pollution-insensitive features of jet substructure are often accessed theoretically either through jet grooming or energy-weighted correlation functions. We present the basics of the modern theory of jet grooming as well as the work of the author in developing the Piranha paradigm for continuous jet grooming, introduced by the author in Ref. [1], and explore the formal and phenomenological benefits of continuous grooming techniques as pollution-insensitive probes of jet substructure. We introduce the basics of the simplest energy-weighted correlation function – the energy-energy correlator (EEC), which probes angular correlations between particle pairs – and discuss its multi-particle analogues. We focus on the efficient and visually intuitive projected and resolved energy correlators introduced by the author in Ref. [2], which provide computationally realistic, pollution-insensitive probes of angular many-body correlations in QCD jets. Finally, we exposit the generic theory of energy-weighted observable correlations (EWOCs), introduced by the author in Ref. [3], which utilizes the energy weighting of the EEC to provide pollution-insensitive probes of non-angular correlations within jets.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164165</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solid-state cavity quantum electrodynamics with spin ensembles</title>
<link>https://hdl.handle.net/1721.1/164164</link>
<description>Solid-state cavity quantum electrodynamics with spin ensembles
Wang, Hanfeng
Quantum sensors have the potential to operate at fundamental physical performance limits. Among various quantum sensing platforms, solid-state spin emitters stand out due to advantageous characteristics such as room-temperature spin polarization and readout, atomic-scale spatial resolution, and extended coherence times. Despite these strengths, traditional optical detection methods exhibit low readout fidelity in solid-state ensembles, severely limiting their achievable sensitivity. This thesis addresses this limitation by coupling a solid-state emitter ensemble to a microwave cavity, forming a cavity quantum electrodynamics system. Our approach eliminates the need for photon collection required by conventional optical readout methods, and the resulting strongly coupled system allows efficient cavity-based probing of the solid-state spin ensemble. By exploiting the hybrid quantum system with cavity quantum electrodynamics, we achieve record-high sensitivity for solid-state quantum sensors, representing a substantial advancement toward achieving fundamental sensing limits.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164164</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding the Phase Space of Photons in Matter: From High-Throughput Screening to Atom-by-Atom Engineering</title>
<link>https://hdl.handle.net/1721.1/164163</link>
<description>Expanding the Phase Space of Photons in Matter: From High-Throughput Screening to Atom-by-Atom Engineering
Ghorashi, Ali
Focusing on the topological band properties of photonic crystals and the plasmonic properties of two-dimensional metals, we seek to answer the question: what is the phase space of photons in matter? For topology, what are the physical parameters that determine whether a given photonic crystal band hosts Dirac points, a non-zero Chern number, or topologically protected corner states? And for plasmons, what are the experimentally addressable ranges of plasmonic dispersions, phase velocities, confinements, and losses? In particular, is it possible to engineer the elusive lossless plasmon? Using high-throughput screening, artificial intelligence, and atom-by-atom engineering through density functional theory, we determine the topological prevalence of photonic bands, propose two systems that evade plasmonic losses through the electron-phonon interaction, and (re)discover general physical laws that govern the geometries of photonic eigenstates.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164163</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Sample Efficiency of Data-Driven Decision Making</title>
<link>https://hdl.handle.net/1721.1/164162</link>
<description>On the Sample Efficiency of Data-Driven Decision Making
Qian, Jian
This thesis studies the fundamental problem of decision making under uncertainty through the lens of statistical decision theory. We characterize the minimax risk, which captures the sample efficiency required for effective decision making across three key settings: offline estimation with batch data, online estimation with sequential data, and interactive decision making as exemplified by multi-armed bandits and reinforcement learning. The first part of the thesis develops novel algorithmic and theoretical tools to enhance decision making in these regimes and to bridge the gaps between them. We revisit logistic regression in the offline setting and provide guarantees without restrictive boundedness assumptions. We then propose meta-algorithms that reduce online estimation to offline estimation, enabling any offline estimator to be used effectively in online scenarios. Furthermore, we present general-purpose algorithms for interactive decision making problems by leveraging offline or online estimation techniques. The second part of the thesis introduces a unified approach to understanding the fundamental complexity of interactive decision making. We propose the Decision Making with Structured Observation (DMSO) framework, which encompasses bandits, reinforcement learning, and more general settings. Within this framework, we develop a new complexity measure—the Decision-Estimation Coefficient (DEC)—which captures both upper and lower bounds for minimax regret. DEC extends classical notions such as the modulus of continuity to interactive scenarios by introducing an adaptive variant of Le Cam’s method. Finally, we unify the three classical lower bound techniques—Le Cam’s method, Assouad’s lemma, and Fano’s inequality—through a generalized formulation that also incorporates the DEC, offering a comprehensive understanding of the minimax risk in decision making tasks.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164162</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards achieving power autonomy in soft-actuated micro aerial robots</title>
<link>https://hdl.handle.net/1721.1/164161</link>
<description>Towards achieving power autonomy in soft-actuated micro aerial robots
Ren, Zhijian
Micro aerial robots with insect-like flight capabilities hold immense promise for various applications, including environmental monitoring, precision agriculture, and infrastructure inspection in confined spaces. However, realizing power autonomy in these miniature robotic platforms presents significant challenges due to weight constraints, power density limitations, and inefficient actuation at small scales. This dissertation presents three essential improvements towards achieving power autonomy in soft-actuated micro aerial robots. Our robotic platform is driven by a dielectric elastomer actuator (DEA) and generates lift force through flapping wings, a mechanism similar to that found in flying insects. First, we implemented a dynamic model to optimize the robot components for pairing with an improved DEA to generate a higher lift force. The robot achieved a peak lift-to-weight ratio of 4.3 and demonstrated a 20-second hovering flight with position and attitude errors smaller than 2.5 cm and 2°. Second, we fabricated a lightweight high-voltage boost converter that transformed a 7 V DC input into an AC waveform of 600 V and 400 Hz to drive the actuator. This is the first onboard boost converter that can drive the soft-actuated micro aerial robot to take off, and it represents a substantial achievement in miniaturizing power electronics for microrobots. Third, we took inspiration from the natural autorotation of maple seeds in their slow descent. We implemented the first samara-inspired mechanism on micro aerial robots, enhancing lift generation while maintaining in-flight attitude stability without feedback control. The 1.22-gram vehicle can stably take off in 1 second with a total input thrust of 1 gram-force. These accomplishments provide a pathway towards achieving power autonomy and open opportunities for developing agile, robust, and autonomous micro aerial robots for diverse applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164161</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low-Energy Electron-Photon Interactions in a Scanning Electron Microscope</title>
<link>https://hdl.handle.net/1721.1/164160</link>
<description>Low-Energy Electron-Photon Interactions in a Scanning Electron Microscope
Simonaitis, John
The interaction of free electrons with matter and light is among the most fundamental processes in nature. From the use of free electrons for atomic imaging to their use in the generation of high-intensity, tunable light in synchrotrons, the physics of unconfined electrons has wide application. In recent years, there has been a new focus on the quantum nature of individual electrons in electron microscopes to enable further improvements in these technologies. This work takes advantage of developments in ultrafast optics, electron spectroscopy, quantum optics, and nanofabrication to explore various electron-electron, electron-photon, and electron-material interactions. In this thesis, we construct a low-energy, ultrafast scanning electron microscope, using it to explore quantum coherent interactions between electrons, light, and matter.&#13;
&#13;
In Chapter 1, we review the history of free electron experiments and how advances in nanofabrication, low-dimensional materials, and ultrafast optics have opened new opportunities for electron-light interactions to a degree not previously possible. In Chapter 2, we discuss experimental forms of quantum electron microscopy known as interaction-free measurement and electron multi-passing. Chapter 3 details a general theory of electron-photon interactions, including simulations with quantum two-level systems and extended optical nanostructures. In Chapter 4, we design and construct a second microscope with ultrafast triggering, an electron spectrometer with sub-eV resolution, nanostructured interaction regions, and active beam alignment. Chapter 5 explores various experimental results, demonstrating enhanced loss spectroscopy of 2D materials, energy resolution of gold nanoparticle plasmons, as well as spectroscopy of time-tagged cathodoluminescence from optical fibers. Finally, in Chapter 6 we discuss future perspectives of this approach, analyzing the impact a heralded electron source would have on electron microscopy and lithography.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164160</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies of Jet Modification in Heavy Ion Collisions with the CMS Experiment</title>
<link>https://hdl.handle.net/1721.1/164159</link>
<description>Studies of Jet Modification in Heavy Ion Collisions with the CMS Experiment
Park, Mary Isabelle
In the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC), lead ions are collided at ultra-relativistic velocities to produce Quark-Gluon Plasma (QGP), a state of matter where quarks and gluons are deconfined and move collectively. Jets are produced in high-momentum transfer parton scatterings prior to and independently of QGP formation, and serve as natural probes of its properties. As the high-energy partons pass through the QGP, they lose energy through medium-induced gluon radiation and elastic scattering, resulting in jets that are modified with respect to the vacuum baseline. In this thesis, jet modification is quantified by measuring the jet production cross section as a function of jet radius in inclusive jets and the jet axis decorrelation in jets recoiling from isolated photons in Lead-Lead (PbPb) and Proton-Proton (pp) collisions. Both measurements indicate that effects of medium-induced jet broadening may be balanced by survivor bias in PbPb collisions, potentially due to differences in the magnitude of quenching of wide versus narrow jets. The results underline the importance of constraining the initial jet kinematics with bosons, which are unmodified by the QGP.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164159</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Drivers of Stratospheric Ozone Change and Fingerprinting its Recovery</title>
<link>https://hdl.handle.net/1721.1/164158</link>
<description>Understanding Drivers of Stratospheric Ozone Change and Fingerprinting its Recovery
Wang, Peidong
Stratospheric ozone serves as Earth’s natural protective layer, shielding the surface from harmful ultraviolet radiation. The discovery of the Antarctic ozone “hole” in the late 1980s raised significant societal and scientific concern, prompting the rapid regulation of ozone-depleting substances (ODSs) under international treaties. While signs of ozone recovery have begun to emerge, new challenges continue to arise. This thesis investigates three critical factors driving stratospheric ozone changes and influencing the detection of ozone recovery: (1) ODS emissions, (2) chemical chlorine processes, and (3) internal climate variability. With ODS emissions being regulated under the Montreal Protocol and studies now focusing on illicit new production on the order of tens of gigagrams per year, the ocean’s role as both a natural source and sink of ODSs becomes increasingly important. However, these processes have often been overlooked or highly simplified in past ozone assessments. Using a hierarchy of models, from simple box models to global ocean general circulation models, I quantified the ocean’s uptake and release of various ODSs. Chapter 2 examines the ocean’s uptake of chlorofluorocarbons (CFCs), particularly emphasizing its influence on estimates of recent illicit CFC emissions. Chapter 3 extends this analysis to include ocean uptake and potential microbial degradation processes, evaluating their effects on emission estimates for various hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs), which are chemical constituents that have been used to replace CFCs. Once these man-made ODSs reach the stratosphere, they are photolyzed to chlorine reservoir species (e.g., HCl and ClONO2), which, through heterogeneous reactions, can transform into reactive chlorine that depletes ozone. While heterogeneous chlorine activation on volcanic ash is well understood, the unprecedented 2020 Australian wildfires raised new questions about chemical processes on smoke particles.
This knowledge gap existed because only a few wildfires had injected significant amounts of smoke particles into the stratosphere during the satellite era. Leveraging over 30 years of satellite data, I separated chemical and dynamic processes affecting chlorine reservoir species to quantify chemical chlorine activation across different aerosol types. In Chapter 4, I developed a new approach to quantitatively estimate the onset temperature for chemical chlorine activation after the 2020 Australian wildfire using satellite observations. Chapter 5 applies this method to compare the impact of chemical chlorine activation from two independent wildfire events with that from a series of volcanic eruptions of varying magnitudes. Despite emerging challenges such as illicit emissions and recent wildfires and volcanic eruptions, advancements in observational records, our understanding of ozone chemistry, and computational power have significantly enhanced our ability to quantitatively detect and attribute stratospheric ozone changes. In Chapter 6, I applied a pattern-based “fingerprinting” technique to quantitatively separate the contributions of ODS forcing from other external forcings and internal variability in satellite observations. This analysis shows that Antarctic ozone increases cannot be explained by internal climate variability alone, providing strong confidence that ozone recovery is underway, primarily driven by human efforts to reduce ODS emissions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164158</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light-Induced Collective Interactions in Arrays of Quantum Emitters</title>
<link>https://hdl.handle.net/1721.1/164157</link>
<description>Light-Induced Collective Interactions in Arrays of Quantum Emitters
Rubies-Bigorda, Oriol
The interaction between light and matter has captivated physicists for centuries, from early studies of vision and refraction in ancient Greece to the development of quantum mechanics and quantum electrodynamics in the past century. While the response of a single quantum emitter to light is well understood, the radiative properties of an ensemble of closely spaced emitters are far more intricate. Coupling to a shared electromagnetic environment induces coherent and dissipative interactions between emitters, giving rise to a collective response that cannot be captured by treating them independently. In the regime of few excitations, the system hosts delocalized subradiant states, that is, coherent superpositions that are largely decoupled from the electromagnetic field and thus decay at suppressed rates. While this weak coupling makes subradiant states attractive for quantum technologies, it also renders them difficult to manipulate. At higher excitation densities, the intrinsic nonlinearity of emitters and the exponential growth of the Hilbert space make theoretical and numerical descriptions of the system and its dynamics increasingly challenging. This thesis explores two fundamental questions: How can subradiant and dark states be selectively accessed and harnessed for practical applications in quantum technologies? And how can interacting ensembles of quantum emitters be efficiently simulated to uncover their many-body physics? The first part of the thesis develops protocols for controlling and addressing dark states in free-space and waveguide-coupled atomic arrays, demonstrating their utility in quantum storage and the deterministic generation of entangled photonic states. Incorporating atomic motion, we further show that collective subradiant states can enhance cooling in dense atomic arrays, offering new avenues for controlling motional dynamics. 
In the second part, we introduce cumulant expansions of the equations of motion as a powerful tool to analytically and numerically investigate collective decay in the many-body regime. We first examine the collective decay of fully excited atomic arrays in free space, characterizing the onset and scaling of the superradiant burst across different geometries. In collaboration with experiments on ultracold erbium atoms in optical lattices, we provide the first direct observations of many-body collective effects in free-space ordered arrays, including early-time superradiant bursts, late-time subradiant tails, and the emergence of atomic correlations throughout the dynamics. Finally, we theoretically and numerically explore the transient formation of multi-excitation subradiant states, and demonstrate how the existence of multiple dissipation channels suppresses steady-state superradiance in extended arrays.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164157</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid Core Inductors for High Saturation Capability</title>
<link>https://hdl.handle.net/1721.1/164156</link>
<description>Hybrid Core Inductors for High Saturation Capability
Yang, Rachel S.
Power electronics are critical for any system requiring electricity and often impact the performance of these systems. In many cases, the performance of power electronics is limited by lossy and large inductors that are constrained by the saturation of their magnetic core material. Such saturation-limited inductors are typically found in power electronics applications where the inductor sees large dc current with relatively small ac ripple, such as EMI filters or converters operating in continuous conduction mode. This thesis investigates two types of inductor designs that can achieve higher saturation capability by combining multiple materials in a single core, enabling these designs to achieve greater energy storage or lower loss than conventional single-material cores. The first design combines a permanent magnet with a soft magnetic material (e.g. ferrite) to form a PM hybrid core. This core achieves higher saturation capability by directing PM flux to oppose winding flux in the ferrite. First-order models, design processes, and other theory for the PM hybrid core are developed in this thesis, and different geometries for this core are explored. Additionally, two PM hybrid core prototypes are presented, one using a pot core geometry and one using a modified E core geometry. The PM hybrid pot core prototype achieves 70% more energy storage or 50% of the dc loss versus comparable ferrite prototypes, while the PM hybrid E core prototype achieves 30% more energy storage or a minimum of 52% of the total loss versus comparable ferrite prototypes. The second design pairs a low-frequency, high-saturation material (e.g. steel) with a low-saturation, high-frequency material (e.g. ferrite) to form a steel hybrid core. This core achieves higher saturation capability by directing most of the dc flux to the steel and all of the ac flux to the ferrite, enabling the core to leverage both materials’ advantages while avoiding their detriments. 
First-order models and design processes for the steel hybrid core are developed in this thesis. An example steel hybrid core design using a pot core is also presented. This design can achieve 220% more energy storage versus a comparable ferrite prototype, and it may achieve lower loss. Its performance, though, is sensitive to manufacturing and assembly imperfections. In this thesis, both the PM hybrid and steel hybrid cores are demonstrated to have great potential in achieving high saturation capability. By leveraging these hybrid cores, inductor designs can achieve greater energy storage density or lower loss and thus enable higher performance power electronics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164156</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Gas Microscopy of Bosonic Correlations in the Continuum</title>
<link>https://hdl.handle.net/1721.1/164155</link>
<description>Quantum Gas Microscopy of Bosonic Correlations in the Continuum
Xiang, Jinggang
This thesis details the complete upgrade and renovation of an existing experimental platform into a high-resolution quantum gas microscope for ultracold 87Rb atoms. Quantum gas microscopes enable site-resolved imaging, providing unprecedented access to quantum statistical effects and many-body phenomena. While such instruments are often employed to study physics in optical lattices, we have innovatively adapted our apparatus to investigate bulk system behavior. A major part of this project involved upgrading the scientific apparatus and retrofitting the previous system. We introduced new optical components, including a high-NA objective, and improved the vacuum system for better optical access. Extensive lab renovations, from upgrading the optical table to reorganizing the laser and imaging setups, were carried out to enhance mechanical and thermal stability. Rigorous optical benchmarking confirmed that the objective achieves diffraction-limited imaging, which is critical for resolving single atoms. This capability allowed us to detect density fluctuations at the scale of the thermal de Broglie wavelength in a quasi-two-dimensional gas of 87Rb atoms. In an experiment resembling Hanbury Brown and Twiss interferometry, we measured a 30% enhancement in the second-order correlation function in situ, demonstrating strong bosonic bunching. This outcome underscores the microscope’s precision and the importance of high-resolution imaging in capturing subtle quantum statistical effects. The successful realization of this apparatus demonstrates the utility of quantum gas microscopes in probing bulk systems. With this new platform in place, future studies can explore critical phenomena, many-body correlations, matter-wave emission, and quantum simulations with cold atoms.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164155</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of Cosmic Ray Lithium Isotopes Using the&#13;
Alpha Magnetic Spectrometer</title>
<link>https://hdl.handle.net/1721.1/164154</link>
<description>Measurement of Cosmic Ray Lithium Isotopes Using the&#13;
Alpha Magnetic Spectrometer
LaVecchia, Gianni
The study of cosmic rays and their properties provides insight into the origins of our universe and is a unique lens on the nuclear physics of the cosmos. The identification of cosmic ray isotopes poses a particular challenge, as it requires the measurement of multiple observables to a high degree of accuracy for the deduction of nuclear mass. Using the unique detection capabilities of the Alpha Magnetic Spectrometer (AMS), the isotope fluxes of cosmic ray lithium in the rigidity range of 1.92 to 25 GV are presented. This work is based on 0.97 million ⁶Li and 1.04 million ⁷Li nuclei collected by the AMS over a 12.5 year period, and improves the precision and extent of existing measurements by a factor of 10. These results lead to the conclusion that there is no sizable primary component in cosmic ray ⁷Li. The&#13;
improvements to the AMS velocity measurement establish the groundwork for future cosmic ray isotope measurements.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164154</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metrics, Muons, Moments, Models, Machine Learning, Measurements, and More: A Manifesto on Collider Physics</title>
<link>https://hdl.handle.net/1721.1/164153</link>
<description>Metrics, Muons, Moments, Models, Machine Learning, Measurements, and More: A Manifesto on Collider Physics
Gambhir, Rikab
The interface between particle theory and particle experiments is essential to improving our understanding of the Standard Model and looking for new physics beyond it. At this interface lies a web of complex and expensive simulations that cannot fully be trusted, experimental and theoretical uncertainties, and overwhelmingly large amounts of data, all while we have yet to find any deviations from the Standard Model.&#13;
&#13;
In this thesis, we propose strategies for improving the theory ↔ experiment pipeline at all stages. We first show how modern Machine Learning and statistical techniques can be used to improve the calibration and resolution of particle detectors in a robust way, which can lead to improved measurement precision. We then develop new classes of measurable observables based on the principles of infrared-and-collinear safety, geometry, and machine learning, which come with guarantees about their theoretical calculability and interpretability, in turn motivating measurements at collider experiments. Finally, we present two complementary approaches to search for new physics: one, in the form of an experimental proposal for a muon beam dump experiment that is viable alongside a full future collider program; and the other, in the form of machine-learning based anomaly detection to search for subtle signals in already-published data.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164153</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Algorithms for Modeling Causality to Accelerate Scientific Discovery</title>
<link>https://hdl.handle.net/1721.1/164152</link>
<description>Practical Algorithms for Modeling Causality to Accelerate Scientific Discovery
Wu, Menghua
Scientific research revolves around the discovery and validation of causal relationships between variables. Machine learning has the potential to increase the efficiency of this process by proposing novel hypotheses from data observations, or by designing experiments that maximize success rate. This thesis addresses these problems through pragmatic approaches, designed to model large systems and incorporate rich domain knowledge. These algorithms are applied to use cases in molecular biology and drug discovery, which highlight their potential to inform efficient experiment design and to automate the analysis of experimental results.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164152</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recycling and Regeneration of Spent Perfusion Media via Ion&#13;
Concentration Polarization</title>
<link>https://hdl.handle.net/1721.1/164151</link>
<description>Recycling and Regeneration of Spent Perfusion Media via Ion&#13;
Concentration Polarization
Wynne, Eric Michael
The widespread adoption of monoclonal antibody therapies is often constrained by their high prices, which can limit accessibility, particularly for patients in low- and middle-income countries. Addressing this economic barrier is crucial to ensure that life-saving treatments can reach all who need them. We present a series of bioprocessing innovations designed to reduce the cost of monoclonal antibody manufacturing and improve global access to these critical therapeutics. The work focuses on developing technologies for media regeneration and recycling, with the goal of reducing the economic and environmental impact of cell culture media in perfusion mammalian cell culture.&#13;
We demonstrate a microfluidic separation device engineered to selectively remove metabolic waste products—specifically ammonia and lactate—from spent media using ion concentration polarization. Building on this foundation, a scalable millifluidic system was developed to enable higher-throughput waste removal. We characterized the impact of media recycling upon batch and perfusion cell cultures. We devised a nutrient supplementation strategy to create ‘regenerated’ media that minimized any effect on cell growth and productivity compared to fresh media.&#13;
To support continuous manufacturing, a perfusion culture system incorporating a microfluidic spiral cell retention device and continuous cell bleed was established, and stable performance was maintained over extended durations. A further innovation introduced a multi-stage waste recovery system that increased media regeneration yield to 87.5%. This recovery rate enabled a self-recycling perfusion bioreactor in which 75% of the media feed was regenerated, without significant impact on cell growth, productivity, or product quality.&#13;
Together, these advances establish a novel biomanufacturing platform that combines electrokinetic waste removal with media regeneration and recycling. The approach is broadly adaptable to mammalian cell culture processes and offers a promising path toward more sustainable, cost-effective, and environmentally responsible production of monoclonal antibodies and other biologics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164151</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling Cooperative Intelligence via Inverse Planning and Probabilistic Programming</title>
<link>https://hdl.handle.net/1721.1/164150</link>
<description>Scaling Cooperative Intelligence via Inverse Planning and Probabilistic Programming
Zhi-Xuan, Tan
How can we build cooperative machines that model and understand human minds — machines that assist us with our goals, coordinate on plans, infer the intentions behind our words, and even learn our norms and values? This thesis presents a scalable model-based approach to building such systems via inverse planning and probabilistic programming. First, we introduce a probabilistic programming architecture that implements a Bayesian theory of mind. This architecture, Sequential Inverse Plan Search (SIPS), performs online inference of human goals and plans by inverting a Bayesian model of incremental human planning. By combining high-performance symbolic planners with sequential Monte Carlo (SMC) inference, SIPS achieves faster-than-real-time speed, while scaling to hundreds of possible goals, and remaining robust to human mistakes due to boundedly-rational planning. Second, we present Cooperative Language-guided Inverse Plan Search (CLIPS), a system that integrates SIPS with large language models (LLMs) to model communicative cooperation. By using LLMs as likelihood functions within probabilistic programs, CLIPS can infer human goals from ambiguous instructions, then provide uncertainty-aware assistance with much higher levels of reliability than LLMs can on their own. In addition, CLIPS can be used to infer the shared intentions of communicating agents from their actions and words. Third, we show how inverse planning can model the acquisition of social normativity, formalizing norm-guided societal behavior as a norm-augmented stochastic game (NSG). In NSGs, agents assume that society follows a shared set of social norms, and infer these norms from the actions of other agents. By doing so, agents can rapidly learn cooperative social norms using orders of magnitude less data than model-free approaches. Finally, we present advances in probabilistic programming infrastructure that have enabled architectures such as SIPS and CLIPS. 
Through interfaces for programmable SMC and probabilistic programming with LLMs, developers can readily compose modeling and inference subroutines when designing probabilistically coherent intelligent systems. Together, these innovations demonstrate the feasibility and scalability of rational AI engineering for cooperatively intelligent machines, while illuminating the computational and algorithmic foundations of human cooperative intelligence.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164150</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomistic Study of Traveling Skyrmions in Multi-Sublattice Magnetic Materials</title>
<link>https://hdl.handle.net/1721.1/164149</link>
<description>Atomistic Study of Traveling Skyrmions in Multi-Sublattice Magnetic Materials
Tremsina, Elizaveta A.
The development of novel energy-efficient computing hardware is imperative for the reduction of the carbon footprint and for the extension of computing, mobile and wearable device lifespan. Recent advances have been focused on turning to novel material systems, and one such avenue is magnetic thin films. Bits of information can be encoded by magnetic twisted textures called skyrmions, which can be efficiently driven by applying electrical current. Recently, emphasis has been placed on investigating antiferromagnetic and ferrimagnetic skyrmions, as opposed to the single-sublattice ferromagnetic ones studied earlier, due to their potential for more rapid dynamics and magnetic stability. However, there is a pressing need for a thorough and detailed understanding of the intricacies of skyrmion motion, in particular, limiting velocity, optimization of trajectory, controlled mobility and, notably, the observed dynamic distortions of skyrmion profiles. For this reason, experimental studies are simply not enough to provide a complete picture, since the material parameter space for systems hosting skyrmions is quite large. We perform a comprehensive study, combining simulation-based and analytical approaches, of the spin-orbit torque driven motion of skyrmions in a wide range of magnetic materials, from crystalline antiferromagnets to ferrimagnets to ferromagnets. We systematically analyze the relationship between physical distortions of the skyrmion profiles, based on the action of local Thiele forces, and internal elastic tension forces, providing a quantitative and nuanced explanation of these effects. These results expand the understanding of fundamental properties of magnetic skyrmions, as well as their potential use in spintronics applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164149</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transformative Lenses: Empowering Learners with New Perspectives Using Generative AI and Augmented Reality</title>
<link>https://hdl.handle.net/1721.1/164148</link>
<description>Transformative Lenses: Empowering Learners with New Perspectives Using Generative AI and Augmented Reality
Leong, Joanne Sau Ling
Learning is a fundamental human drive that has been shaped by technological advancements over the years. The emergence of generative AI marks a profound shift—its capacity to produce text, images, and video challenges long‐held beliefs about what only humans could create. This shift creates new opportunities for learning, including enabling the design of more customized and personalized learning experiences. Recognizing that learning is deeply influenced by our perceptions of ourselves, others, and our materials and environments, I propose creating transformative lenses powered by generative AI and augmented reality (AR) to adapt what learners perceive, as a means to empower them with new perspectives. I design and implement a set of novel interactive systems and experiences as case studies that address factors including creativity, communication, and motivation. Studying the use of these systems, I gather early evidence that such lenses can help people to overcome their own limiting thoughts and emotions to move towards realizing their full potential. Reflecting on these case studies, I distill key considerations for designing and applying transformative lenses. Finally, I discuss the broader implications of this work at the evolving intersection of generative AI and learning.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164148</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language Models as Opinion Models: Techniques and Applications</title>
<link>https://hdl.handle.net/1721.1/164147</link>
<description>Language Models as Opinion Models: Techniques and Applications
Brannon, William
Real-time social media platforms now host the news cycle and shape public opinion, while large language models (LLMs) give us new tools to observe and predict those shifts. This dissertation links the new affordances of social media with the predictive power of LLMs to explain -- and forecast -- opinion change. We first quantify the dynamics of news on an influential social platform, then develop LLM-based tools to forecast persuasion and predict heterogeneous treatment effects (HTEs).&#13;
&#13;
Study I — Media tempo and tone. Using 518,000 hours of U.S. talk-radio broadcasts and 26.6 million tweets from elite and mass users, we show that Twitter discourse (i) moves faster at both take-off and fade-out stages of a news event and (ii) sustains greater outrage than radio – despite radio’s often explicitly outrage-focused business model. To our knowledge, this is the first large-scale, data-driven comparison between Twitter and traditional media of both outrage levels and the rate of decay of attention to news.&#13;
&#13;
Study II — Zero-shot persuasion forecasting. Across a diverse set of 28 randomized experiments, LLM-based methods outperform an ensemble of strong baselines at predicting HTEs and deliver good performance at predicting average treatment effects (ATEs) — all without any experiment-specific fine-tuning.&#13;
&#13;
Study III — Transfer and scaling. Fine-tuning LLMs on contemporaneous news coverage boosts HTE (and ATE) prediction performance greatly, to more than 3x baseline performance. A new minibatch-moment-matching (M3) objective lets us train a 400M-parameter model to nearly match the HTE prediction performance of an 8B model at a fraction of the inference cost. Transfer, however, falters out of distribution on held-out experiments and demographic groups, lending support to contextual theories of persuasion.&#13;
&#13;
Overall, we (i) quantify how platform affordances shape the tone and tempo of public discourse, (ii) introduce LLM-based methods that make causal experiments more sample-efficient, and (iii) chart the limits of transfer learning for opinion prediction. Our findings provide practical tools for HTE prediction and help researchers anticipate persuasion dynamics in a media landscape shaped by both humans and machines.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164147</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Across the Scales of the Nucleus: Understanding Short Range Correlations from Medium Modification to Probe Independence</title>
<link>https://hdl.handle.net/1721.1/164146</link>
<description>Across the Scales of the Nucleus: Understanding Short Range Correlations from Medium Modification to Probe Independence
Denniston, Andrew W.
The atomic nucleus presents an intricate system due to the non-linear forces described by Quantum Chromodynamics (QCD) that govern its structure. The range of scales involved is remarkable; the most massive nuclei weigh approximately five orders of magnitude more than the quarks that compose them. The nucleus can be analyzed at various levels, from quarks to hadrons to the nucleus as a whole. Short-Range Correlations (SRCs) within the nucleus play a significant role that spans these diverse scales. At the most fundamental level, SRCs influence the interaction between nucleons. The nucleon-nucleon (NN) interaction, arising from QCD, is crucial in determining nuclear properties. SRCs serve as valuable probes for measuring this NN interaction, as the nucleons within SRCs become effectively decoupled from the rest of the nucleus. Multiple experimental techniques, including electron scattering, have been employed to investigate the NN interaction through SRCs. However, our first project demonstrates that inclusive measurements alone are inadequate to constrain this interaction fully. Moving to the scale of the nucleus, SRCs contribute to the high-momentum tail of the nuclear spectral function. While the low-momentum region is characterized by nucleons exhibiting bulk properties, nucleons begin to pair into SRCs at higher momenta. Our research aims to bridge the understanding between the mean-field portion of the nucleus and its high-momentum SRC components. Additionally, SRCs affect the quark structure of protons, as evidenced by the EMC effect, which indicates that quarks behave differently when protons are embedded within a nucleus—an effect referred to as medium modification. This thesis explores the correlation between SRCs and medium modification across various experimental setups. Finally, we seek to establish an interpretation of the nuclear ground-state. 
Accomplishing this requires demonstrating that our SRC observables are independent of the probe’s scale and scheme. The concluding project of this thesis illustrates how we utilize triple coincidence quasi-elastic scattering across a range of Q² values to develop a model-dependent framework for understanding SRC distributions within the nucleus’s ground-state wavefunction.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164146</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovering and Engineering the Computation Underlying Large Intelligent Agents</title>
<link>https://hdl.handle.net/1721.1/164145</link>
<description>Discovering and Engineering the Computation Underlying Large Intelligent Agents
Sharma, Pratyusha
The richness of language and intelligent behavior has often been attributed to latent compositional structure. Can we build tools for discovering how deep networks learn and represent this latent structure implicitly? And, more importantly, can we use this knowledge to improve generalization in largely structure-less general-purpose models or refine our understanding of the world they describe? In this dissertation, I present three perspectives to answer these questions. First, I present experimental methods to functionally characterize the space of learnt solutions in LLMs and demonstrate how this understanding can be used to improve their empirical generalization in a gradient-free manner, sometimes by as much as 30 percentage points on language understanding benchmarks. Following that, I show how to decipher the structure of another (black-box) language-like system, the naturally arising communication system of sperm whales in the wild, discovering for the first time a unique combinatorial communication system. Finally, I apply insights from these results to equip embodied agents with a latent language of thought—hierarchical and compositional—and show how it can enable long-horizon reasoning and planning in these systems. This dissertation ultimately aims to bridge the gap between natural and artificial intelligence, offering new insights into both the fundamental nature of communication in complex biological organisms in the wild and the development of more powerful and improved AI systems. A key pattern in the discoveries in this thesis has been how simple structures enable complex externalized behaviors in both biological organisms and AI systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164145</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Volume Mount Devices</title>
<link>https://hdl.handle.net/1721.1/164144</link>
<description>Volume Mount Devices
Han, Alan
As Moore's Law ends and AI demands increasingly tax our climate and resources, the limitations of two-dimensional electronics integration have become critical bottlenecks. Surface-mount devices (SMDs) remain entrenched in industry practice despite being insufficient for today's computing challenges and sustainability needs. This thesis introduces the volume mount device (VMD), a three-dimensional electronics packaging standard that bypasses the traditional die-to-server stack while offering a scalable, reversible framework inspired by natural ecosystems' circularity.&#13;
The VMD approach embeds both electrical function and mechanical structure into modular elements that assemble freely in 3D space. Rather than building circuits on planar PCBs, this system constructs functional circuits by linking components into a self-constraining lattice architecture. My current implementation leverages existing supply chains by incorporating SMD components on small tile PCBs, while establishing a pathway toward eventually replacing SMDs at the IC packaging level.&#13;
I developed a hybrid assembly system combining 3D printing and pick-and-place automation to build multi-layered electronic assemblies efficiently. Where prior work achieved only tens of parts at hundreds of components per hour (CPH), my system demonstrates automated assembly of hundreds of integrated elements at approximately 1000 CPH. I evaluate various geometric configurations, assess performance overhead compared to conventional approaches, and develop cost-effective, self-aligning connector interfaces for reliable joints—creating a foundation for electronics systems that can be assembled, disassembled, and reassembled as needed while improving resilience against supply chain disruptions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164144</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Diffusion Models Towards De Novo Protein Design</title>
<link>https://hdl.handle.net/1721.1/164143</link>
<description>Generative Diffusion Models Towards De Novo Protein Design
Yim, Jason
De novo protein design aims to generate proteins with desired functions by rationally engineering novel protein structures and sequences. The structure requires modeling continuous 3D coordinates of atoms with rigid biochemical constraints of the polymer chain while the sequence is a series of discrete amino acids that should fold into a plausible structure. Understanding the protein function-structure-sequence relationship necessary for protein design is complex, but deep learning has proven promising to learn the relationship from large protein datasets. This thesis aims to develop deep learning models that generate novel structures and sequences that can be guided towards desired functions. We first describe novel generative models that learn to generate protein structures and sequences by developing diffusion models over general state spaces including Riemannian manifolds and discrete tokens. The resulting methods – FrameDiff, FrameFlow, and MultiFlow – demonstrate the ability of diffusion models to extrapolate beyond the training data to generate novel and diverse protein structures and sequences that pass in silico protein design filters. Next, we apply diffusion models to practical protein design challenges by collaborating with experimental and computational biologists to develop RoseTTAFold Diffusion (RFdiffusion). By combining the structure prediction capabilities of RoseTTAFold and diffusion modeling principles, RFdiffusion can generate functional proteins with in vitro validated properties such as high-affinity binders and symmetric protein assemblies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164143</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Solve Long-Horizon Robot Manipulation Problems</title>
<link>https://hdl.handle.net/1721.1/164142</link>
<description>Learning to Solve Long-Horizon Robot Manipulation Problems
Yang, Zhutian
If we want mobile robots that perform multi-step tasks in visually diverse and geometrically complex environments, we need them to quickly decide what to do and how to do it. Manipulating multiple objects in environments with movable and articulated obstacles over time requires the robot to satisfy constraints like collision-freeness, reachability, and action feasibility. For problems with large state spaces, continuous action spaces, and long decision horizons, the hybrid constraint satisfaction problems induced by planners become combinatorially difficult to solve. In this thesis, I will discuss strategies for using offline learning to speed up deployment-time planning, i.e., using a plan feasibility predictor, a subgoal generator, or a compositional joint continuous constraint solver. I will also present strategies for chaining policies learned from demonstrations using conditional inputs, such as key poses and natural language, for generalization in real-world environments. With the resulting efficient long-horizon manipulation planning system, we can solve complex robotic manipulation problems faster at deployment time. The system can also be used to generate diverse large-scale whole-body trajectories as part of the data mixture for training robot foundation models in embodied reasoning, planning, and acting.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164142</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building small domain-specific masked language models vs. large generative models for clinical decision support and their effects on users.</title>
<link>https://hdl.handle.net/1721.1/164141</link>
<description>Building small domain-specific masked language models vs. large generative models for clinical decision support and their effects on users.
Sergeeva, Elena
A frequently adopted definition of knowledge is “justified true belief”. As one may notice, this definition presents some issues when applied to AI: it is unclear to what degree it is justified to use “humanizing” vocabulary like “belief” or “justification” when describing the performance of an AI system. Traditional AI based on explicit knowledge representation involves reasoning over symbolic representations of statements standing for such “justified true beliefs” [1]; the modern connectionist methodology, however, replaces explicit reasoning with making a prediction based on a set of computations done over weighted continuous representations of the inputs. The continuous representations learned by such systems remain “black box-like”, where the only elements directly understandable by the human user are the model inputs and outputs. In the first part of this thesis, I introduce a set of masked-language-model transformer-based models for a diverse set of medical natural language processing tasks, including Named Entity Recognition, Negation Extraction, and Relation Extraction, that perform as well as or better than larger prompt-and-generate transformer-based causal language models. In the second part of the thesis, I discuss the modern “prompt-and-generate” approach to natural language processing, where both the inputs and the outputs of the model are word-like elements commonly referred to as “tokens”. I explore the nature of the token-based representation of the input and look at the way token “meaning” is refined at each layer of the successive transformer computation. With respect to the outputs, I explore how people engage with AI-generated sequences of tokens that they perceive as “explained” predictions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164141</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language-Centric Medical Image Understanding</title>
<link>https://hdl.handle.net/1721.1/164140</link>
<description>Language-Centric Medical Image Understanding
Wang, Peiqi
This thesis advances medical image understanding by leveraging the multifaceted roles of language: as supervision, prior knowledge, and a medium for communication. We introduce three main contributions: (1) a weakly supervised framework that uses language in clinical reports to guide fine-grained alignment between image regions and textual descriptions, (2) an adaptive debiasing method that uses language prior to improve the robustness of learning algorithms under noisy supervision, and (3) a novel approach for calibrating linguistic expressions of diagnostic certainty, enabling more reliable communication of clinical findings. Together, these methods lead to more accurate, robust, and reliable machine learning systems, ultimately streamlining clinical workflows and improving patient care.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164140</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring spin physics with ultracold atoms</title>
<link>https://hdl.handle.net/1721.1/164139</link>
<description>Exploring spin physics with ultracold atoms
Lee, Yoo Kyung
The dynamics of many interacting spins is an active frontier of research; not only can they explain magnetic phenomena, but they also provide paradigmatic models with deep connections to high-T_c superconductivity, optimization problems, neural networks, and more. Experiments with ultracold alkali atoms in optical lattices have realized spin models with great success. In particular, the isotropic Heisenberg model---the XXX model---was realized more than a decade ago. The ⁷Li apparatus described here was the first to realize a tunable, anisotropic Heisenberg model, also known as the XXZ model.&#13;
&#13;
In this thesis, I will describe how the capabilities of this apparatus were harnessed to characterize the spin models we realize, to employ them to observe new resonances, and to contribute to studies in spin squeezing and fundamental physics. First, I will discuss how we prepared and observed phantom helix states: eigenstates of the XXZ Hamiltonian. Our understanding of the contact interactions and the phantom helix states enabled us to observe long-predicted lattice-induced resonances, whose effects can be leveraged as another knob to tune the XXZ Hamiltonian. Furthermore, our control over the spin system allowed us to generate spin-squeezed states, a paradigmatic form of entanglement for spin ensembles. This is the first time squeezed states were realized with nearest-neighbor contact interactions in a lattice. Finally, our control over the spin degree of freedom and defects in our state preparation allowed us to create pristine periodic lattices with which to study gedankenexperiments in light scattering.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164139</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing the Diversity of Fast Radio Bursts with CHIME/FRB</title>
<link>https://hdl.handle.net/1721.1/164138</link>
<description>Probing the Diversity of Fast Radio Bursts with CHIME/FRB
Shin, Kaitlyn
Fast radio bursts (FRBs) are extremely bright extragalactic radio transients that flash for microseconds to milliseconds at a time, most never to repeat again. Encoded in every observed FRB is information from burst propagation effects, giving us clues about their mysterious origins as well as the environments they traveled through. With inferred all-sky rates of hundreds per day, FRBs are of great interest both for studies of extreme astrophysical processes and as probes of cosmological properties of the Universe. The Canadian Hydrogen Intensity Mapping Experiment (CHIME)/FRB project has revolutionized the FRB field with its field-leading discovery rate. With CHIME/FRB, we can start to carry out population-level studies of FRBs to constrain their origins and inform their use as cosmological probes. I present the first population-level studies of CHIME/FRB-observed FRBs using the CHIME/FRB Catalog 1 data release and the injections system to account for observational biases. I discover that CHIME/FRB is likely observationally biased against bursts originating from turbulent local environments, and constrain the energy and distance distributions of FRBs. I also present the Catalog 1 dataset updated with channelized raw voltage (“baseband”) data (“BaseCat1”), in which I played a pivotal role. The CHIME/FRB baseband localization pipeline can localize FRBs to arcminute precision as long as the signal is bright enough to trigger the saving of offline baseband data. I then discuss two single-source studies enabled by the baseband localization pipeline — one discovering repeaters during phases of unusually heightened burst activity, and one using the burst properties of an unusual FRB to probe the properties of its sightline. In the latter study, I constrain the electron density content of a diffuse filamentary structure on the outskirts of the Virgo Cluster, demonstrating the power of FRBs as probes of diffuse media.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164138</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling for the Ionospheric and Baseline-Offset Uncertainties in the CHIME/FRB Outriggers VLBI Network for Milliarcsecond Precision</title>
<link>https://hdl.handle.net/1721.1/164137</link>
<description>Controlling for the Ionospheric and Baseline-Offset Uncertainties in the CHIME/FRB Outriggers VLBI Network for Milliarcsecond Precision
Willis, Jacob
Fast radio bursts (FRBs) are a novel form of radio transients discovered in 2007. These bright, extragalactic radio signals have an inferred all-sky rate of hundreds of detections per day. The properties of FRBs hold valuable clues about the extreme physical processes driving them while also holding information about the astrophysical plasmas they traverse on their journey to Earth. The Canadian Hydrogen Intensity Mapping Experiment (CHIME)/FRB project has led the field with the hundreds of FRB detections the collaboration has published to date. However, these detections typically have localization regions so large that we cannot identify a single host galaxy, let alone its local environment. To improve upon this, CHIME/FRB has been transformed into a very long baseline interferometry (VLBI) array, drastically increasing the angular resolution of CHIME/FRB from arcminute to sub-arcsecond precision.&#13;
&#13;
In this work, I present my contributions to commissioning the CHIME/FRB VLBI Outrigger station located at the Green Bank Observatory (GBO) in West Virginia. This includes measuring and validating GBO's exact position to enable the localization of FRBs to sub-arcsecond precision.&#13;
&#13;
For VLBI networks spanning thousands of kilometers, the difference in the local ionospheric environments is significant and leads to errors in the CHIME/FRB Outrigger localizations. I present a thin shell model of the ionosphere to parameterize the local ionospheric environment for each VLBI station. This model may be used to interpolate the error induced by the ionosphere in FRB observations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164137</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Z-hadron correlations to probe the medium response in PbPb and pp collisions at √s_NN = 5.02 TeV</title>
<link>https://hdl.handle.net/1721.1/164136</link>
<description>Using Z-hadron correlations to probe the medium response in PbPb and pp collisions at √s_NN = 5.02 TeV
Chou, Pin-Chun
The first measurement of the Z-hadron two-particle correlation function is reported in PbPb collisions at √s_NN = 5.02 TeV, using the PbPb collision data taken in 2018. The integrated luminosity of the PbPb data is 1.67 ± 0.03 nb⁻¹, which made the analysis possible for the first time. Collision data with at least one Z boson with 40 &lt; pT &lt; 200 GeV/c are analyzed. The azimuthal angle distributions with respect to the Z bosons, which are sensitive to modifications of the in-medium parton shower and to medium recoils, are measured in central PbPb collisions. A significant modification of the two-particle correlation in pseudorapidity difference and azimuthal angle difference is observed with respect to the reference measured in pp collisions. These results are compared to phenomenological models that include medium recoil, medium response, and thermalization of the QGP wakes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164136</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>DePUDS: Decentralized Prosocial Urban Development System</title>
<link>https://hdl.handle.net/1721.1/164135</link>
<description>DePUDS: Decentralized Prosocial Urban Development System
Zhang, Yan
Urban areas face severe socio-economic and environmental challenges like housing crises, inequity, and environmental degradation, often worsened by traditional zoning practices. These are typically rigid, inefficient, outdated, and susceptible to obstruction by narrow special interests (NIMBYism), failing to engage the broader community or adapt to evolving needs. This dissertation proposes the Decentralized Prosocial Urban Development System (DePUDS), a novel governance framework designed to overcome these shortcomings by empowering informed collective consensus and including the often-silent majority.&#13;
DePUDS integrates decentralized technologies like blockchain and smart contracts with structured economic incentives, facilitated through an accessible user-friendly Decentralized Application (DApp) to encourage broad participation. This system fosters transparent, inclusive, and equitable urban development. Its core mechanism, adaptive incentive-based zoning, dynamically aligns developer profitability with community-endorsed priorities—such as affordable housing, public amenities, and sustainability—providing flexibility absent in traditional zoning.&#13;
Employing advanced agent-based simulations enhanced by large language models (LLMs), this research rigorously assesses DePUDS's effectiveness across two distinct case studies: Kendall Square in Cambridge, MA (a dynamic innovation hub) and the Inner Richmond District in San Francisco, CA (a culturally rich but housing-constrained neighborhood). Simulation results demonstrate DePUDS significantly aligns development outcomes with community preferences. In Kendall Square, targeted incentives substantially increased affordable housing and public amenities without hindering private investment. In the Inner Richmond, substantial community-driven incentives successfully unlocked constrained development, markedly reducing displacement risks, boosting affordable housing, enhancing amenity access, lowering carbon emissions via density, and preserving local cultural assets.&#13;
The comparative analysis underscores DePUDS's versatility, showing its potential to enhance growth in active markets and stimulate development in constrained ones. Key policy implications point towards structured DApp-based community participation, adaptive incentive zoning, and dedicated funding. While acknowledging practical implementation hurdles (legal, economic, technological), the findings affirm the feasibility, effectiveness, and transformative potential of decentralized, incentive-driven urban governance. This dissertation offers significant theoretical contributions, practical policy guidelines, and future research pathways to foster more inclusive, sustainable, and resilient urban communities.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164135</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Materializing Light: Real-time, Handheld Fabrication of Programmable Structural Color</title>
<link>https://hdl.handle.net/1721.1/164134</link>
<description>Materializing Light: Real-time, Handheld Fabrication of Programmable Structural Color
Myers, Paris G.
Structural color is nature’s programmable color palette. While pigments and dyes absorb light to produce color, structural color uses nanoscale, light-reflecting structures to appear iridescently colored. We present MorphoChrome, an optical device for real-time, handheld, programmable structural color fabrication. Analogous to painting with light, MorphoChrome creates multicolor, structurally colored designs by exposing a commercially available holographic photopolymer film to user-controlled wavelengths. Within the device, red, green, and blue laser diodes go through an optical prism, combining light and producing mixed color outputs on the film. Additionally, we introduce a resin-based process to adhere and integrate the structurally colored film with flexible and rigid objects and diverse making processes. In this thesis, we focus on the device optical design and fabrication, color-mixing, color output UI controller, device aperture tips, and the holographic photopolymer film adherence process. We evaluate the available color space and color resolution, and demonstrate creative fabrication applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164134</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biophysical specializations supporting efficiency in neural networks</title>
<link>https://hdl.handle.net/1721.1/164133</link>
<description>Biophysical specializations supporting efficiency in neural networks
Toloza, Enrique H.S.
Neuroscience and artificial intelligence (AI) research have long enjoyed a synergistic relationship. AI has drawn key inspiration from the organization and function of the brain, while our understanding of the biological processes underlying computation has been profoundly enriched by studying the behavior of artificial systems. As breakthroughs in generative AI continue to transform our world, and as the need for more sustainable artificial neural systems becomes more urgent, the neuro-AI feedback loop has never been more important. AI needs ever more powerful and efficient systems, and neuroscience needs further insight into how our brains work. The development of more brain-like AI promises solutions to both of these problems. Unfortunately, this has thus far been stymied by two critical challenges: 1) how do we identify the features that make a system brain-like, and 2) how do we incorporate these features into artificial networks in a useful and interpretable way? To address the first of these challenges, I will use the remarkable structural and biophysical diversity of the brain as an introduction to what it means for a system to be “brain-like.” This will lead us to a discussion of dendrites, the tree-like structures implicated at virtually every length scale of neural computation. Dendrites will thereafter act as the focal point for our study of brain-like computation. Specifically, I will trace how relatively simple biophysical features defined at the subcellular level can transform the computational landscape of large networks of neurons. To address the second of these challenges, it is necessary to discuss several enduring problems in computational neuroscience, broken down as chapters in this thesis. 
In Chapter 2, I will present the development of a new model of single-neuron dynamics that is realistic enough to capture the rich dynamics of dendritic spiking but efficient enough for use in simulations of thousands of neurons, thereby filling a long unmet need in the field. In Chapter 3, I will describe a solution to the general problem of training neural networks with arbitrary differentiable dynamics, thus opening the door for the study of countless biophysical phenomena in the context of networks that can learn to perform computations. In Chapter 4, I will use these tools to test several longstanding hypotheses regarding the utility of different biophysical features in neurons, performing first-of-their-kind fair comparisons of the computational performance of spiking networks, rate-based networks, and networks with nonlinear and linear dendrites. Finally, in Chapter 5, I will use insights gained from studying dendrites at the network level to provide a new perspective as to how the structural and biophysical diversity of the brain could emerge from a complex interplay of functional pressures (e.g., task demands) and physical constraints (e.g., space and energy). Together, the chapters of this thesis outline a general quantitative framework for building more brain-like AI for use in both AI research and neuroscience. This framework illustrates how biophysical specializations arising at the level of single neurons shape the emergent dynamics of the brain.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164133</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Next Week Tonight: Simulating Counterfactual Narratives of the future using Agentic Knowledge Graphs</title>
<link>https://hdl.handle.net/1721.1/164132</link>
<description>Next Week Tonight: Simulating Counterfactual Narratives of the future using Agentic Knowledge Graphs
Agarwal, Gauri
Understanding the ripple effects of events—both real and speculative—is essential for navigating complex futures. Large Language Models (LLMs) have emerged as powerful tools that offer a user-friendly, narrative experience for question answering and reasoning across large corpora of unstructured data [15, 96]. While LLMs can respond to complex ‘what-if’ questions, they typically provide single, unverifiable answers. Even with retrieval-augmented generation (RAG), which grounds LLM responses on external sources, the opacity of reasoning pathways undermines trust in model outputs [97]. Next Week Tonight (NWT) builds on the narrative and reasoning capabilities of LLMs by making the exploration of what-if futures more transparent and evidence-based. NWT exposes the underlying knowledge graph, allowing users to inspect inference pathways directly. This also enables the generation of multiple, diverse scenarios from a single condition—each following different but explainable causal chains. In testing 15 counterfactual prompts that span diverse news topics, NWT produced scenario narratives that were rated as significantly more causally coherent, transparent, and easier to audit than standard chat completions. Beyond technical performance, NWT reinvents scenario planning as an interactive narrative experience, encouraging curiosity, critical thinking, and deeper engagement with the complexities of future events. By surfacing not only what could happen but why and how, NWT aims to empower analysts, policymakers, and the public to navigate uncertainty with greater clarity and confidence. Github: https://github.com/viral-medialab/next-week-tonight
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164132</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Learnability of General Reinforcement-Learning Objectives</title>
<link>https://hdl.handle.net/1721.1/164131</link>
<description>On the Learnability of General Reinforcement-Learning Objectives
Yang, Cambridge
Reinforcement learning enables agents to learn decision-making policies in unknown environments to achieve specified objectives. Traditionally, these objectives are expressed through reward functions, enabling well-established guarantees on learning near-optimal policies with high probability, a property known as probably approximately correct (PAC) learnability. However, reward functions often serve as imperfect surrogates for true objectives, leading to reward hacking and undermining these guarantees. This thesis formalizes the specification and learnability of general reinforcement-learning objectives beyond rewards, addressing fundamental questions of expressivity and policy learnability. I examine three increasingly expressive classes of objectives: (1) Linear Temporal Logic (LTL) objectives, which extend conventional scalar rewards to temporal specifications of behavior and have garnered recent attention; (2) computable objectives, encompassing a broad class of structured, algorithmically definable objectives; and (3) non-computable objectives, representing general objectives beyond the computable class. For LTL objectives, I prove that only finitary LTL objectives are PAC-learnable, while infinite-horizon LTL objectives are inherently intractable under the PAC-MDP framework. Extending this result, I establish a general criterion: an objective is PAC-learnable if it is continuous and computable. This criterion facilitates establishing PAC-learnability for various existing classes of objectives whose PAC-learnability was previously unknown, and it informs the design of new, learnable objective specifications. Finally, for non-computable objectives, I introduce limit PAC-learnability, a practical relaxation in which a sequence of computable, PAC-learnable objectives approximates a non-computable objective. 
I formalize a universal representation of non-computable objectives using nested limits of computable functions and provide sufficient conditions under which limit PAC-learnability holds. By establishing a theoretical foundation for general RL objectives, this thesis advances our understanding of which objectives are learnable, how they can be specified, and how agents can effectively learn policies to optimize them. These results contribute to the broader goal of designing intelligent agents that align with expressive, formally defined objectives—moving beyond the limitations of reward-based surrogates.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164131</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solid-State Quantum Memories for Near-Term Quantum Repeaters</title>
<link>https://hdl.handle.net/1721.1/164130</link>
<description>Solid-State Quantum Memories for Near-Term Quantum Repeaters
Sutula, Madison M.
Over the past decade, quantum computers have emerged as a promising technology to enable transformational advances in information processing and communication and to solve problems that are intractable for classical computers. While there is great promise in linking quantum computers together over long distances via quantum channels, these technologies are still under development. Solid-state emitters with coherent spin-photon interfaces, long spin lifetimes, and narrow optical transitions are a leading platform for use as quantum memories in networked quantum repeaters. However, while such emitters have already enabled advanced quantum networking demonstrations in laboratory settings, deploying them as practical memory devices at scale remains a key outstanding challenge. In this thesis, we experimentally investigate solid-state quantum memories for quantum information applications. First, we develop experimental techniques to characterize solid-state emitters with high throughput, enabling both a better understanding of the distribution of emitter properties and improved feedback on material preparation and device fabrication. Next, we implement quantum frequency conversion to create a coherent spin-photon interface between silicon-vacancy centers in diamond and optical photons in the low-loss telecom band. Finally, we investigate color centers in other engineering materials, including silicon and silicon carbide, to better understand the fundamental trade space of requirements for solid-state hosts. Together, these efforts represent a significant advance in creating, controlling, and deploying telecom-compatible spin interfaces, paving the way for memory-enabled quantum repeaters.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164130</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferring Clonal Dynamics in Blood using Single-Cell Measurements</title>
<link>https://hdl.handle.net/1721.1/164129</link>
<description>Inferring Clonal Dynamics in Blood using Single-Cell Measurements
Perry, Andrea N.
In this work, we uniquely tag hematopoietic (blood) stem cells with genetic barcodes and follow their progeny over time to ask whether clonally related cells in myeloproliferative neoplasms (MPNs) favor particular blood cell fates. Myeloproliferative neoplasms are clonal disorders driven most frequently by the JAK2-V617F mutation, which arises in a single hematopoietic stem cell (HSC) and ultimately dominates the normal process of blood cell production. Although all patients carry the same driver mutation, they still branch into three distinct disease forms—essential thrombocythemia (ET), polycythemia vera (PV), or primary myelofibrosis (PMF)—and the reason for this variation remains unknown. One compelling hypothesis is that the JAK2-V617F mutation may arise in HSC subsets with intrinsic biases toward platelet-producing cells (as in ET) or red blood cell precursors (as in PV). To investigate this question, we analyzed bone-marrow cKit⁺ cells from mice engineered for inducible MPN disease and CRISPR array repair lineage tracing (CARLIN), using single-cell RNA sequencing. Our gene expression analysis shows that the mutation keeps key signaling and stress-response genes switched on and boosts growth-promoting enzymes, collectively pushing blood production toward the myeloid line. At the resolution of individual CARLIN clones (i.e., cells grouped by a shared progenitor), however, we observe no robust mutation-induced lineage bias—an outcome attributable to limited clone recovery and inter-mouse variability. Crucially, this work establishes a scalable analysis pipeline for future, higher-yield CARLIN experiments. Enhancing lineage-tracing sensitivity, barcode diversity, and biological replication will be essential to test whether these interferon-/stress-response and kinase programs manifest as subtle, clone-level fate biases in JAK2-driven MPN.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164129</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principled Approaches for Latency Reduction in Networking Systems</title>
<link>https://hdl.handle.net/1721.1/164128</link>
<description>Principled Approaches for Latency Reduction in Networking Systems
Pit-Claudel, Benoit
Modern networks face unprecedented challenges due to exponential growth in traffic demands, driven by AI workloads in datacenters and the ubiquitous adoption of cloud services across the internet. This dissertation addresses three critical challenges in network systems: efficient scheduling of inference tasks, performance optimization in hybrid networks, and memory-efficient load balancing in datacenters.&#13;
&#13;
First, we introduce Nona, a stochastic scheduling framework that leverages queueing theory to optimize task placement in datacenter environments. By employing randomized algorithms and considering both network and compute constraints, Nona demonstrates improvements of multiple orders of magnitude in job completion times while maintaining implementation simplicity. Nona proposes stochastic scheduling, in which the complexity of the scheduling problem is moved to an offline phase. When handling jobs online, stochastic schedulers are oblivious to the instantaneous state of the network and rely only on predetermined allocation probabilities to make lightning-fast decisions. Second, we present LINC, an in-network coding solution designed for hybrid backbone networks. Through comprehensive mathematical analysis and simulation, we highlight the benefits of network coding in cases where no modifications of the end-hosts are possible. Finally, we develop Sirona, a memory-efficient version of a reactive subflow spraying mechanism suited for hardware deployment. We show that Sirona can achieve competitive performance in homogeneous and heterogeneous datacenter networks while keeping a low memory footprint.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164128</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forward Modeling for Bolometry and Disruption Mitigation in Tokamaks or How to Kill Your Plasma With Confidence, Style, and Pizzazz</title>
<link>https://hdl.handle.net/1721.1/164127</link>
<description>Forward Modeling for Bolometry and Disruption Mitigation in Tokamaks or How to Kill Your Plasma With Confidence, Style, and Pizzazz
Stein-Lubrano, Benjamin
The tokamak is a promising approach to magnetic confinement fusion. Tokamak functionality is threatened by plasma disruption events, which can damage critical machine components. Disruption damage can be mitigated by high-Z impurities, delivered by Massive Gas Injection (MGI) or Shattered Pellet Injection (SPI). Impurities radiate energy out of the plasma and onto the first wall. Evenly distributed radiation causes less damage than unmitigated disruption pathways, which deliver concentrated heat loads. In order to successfully develop and deploy mitigation systems, it is important to accurately measure and characterize disruption radiation. Accurate measurement is challenged by fast disruption timescales and highly asymmetric radiation patterns, which push the temporal and spatial resolution limits of radiant heat sensors. Previous radiation analysis approaches are typically limited to two dimensions or less by the highly under-determined nature of tomographic reconstruction and the limited spatial resolution of sensors. Two-dimensional analysis is often inaccurate for disruption radiation, which can be highly three dimensional as a result of localized impurity sources and fast 3D MHD events. In this thesis, I present a new algorithm for 3D radiation analysis in tokamak disruptions, called Emis3D. When Emis3D is applied to mitigated disruptions on the JET tokamak, a significant injection-plume radiation effect is revealed. When this effect is included in radiated energy calculations, the mitigated radiation fraction of plasmas with high thermal energy content is significantly higher than previously estimated, indicating that thermal mitigation is more effective than previously thought. Emis3D can also be used as a design tool to evaluate potential radiant heat sensor layouts. When applied to the SPARC tokamak, Emis3D demonstrates that toroidally skewed sensor sightlines improve spatial resolution and reduce blind spots, allowing more accurate measurement.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164127</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Milky Way with Stars</title>
<link>https://hdl.handle.net/1721.1/164126</link>
<description>Understanding the Milky Way with Stars
Ou, Xiaowei
"How do galaxies form?" is one of the most important questions in modern astrophysics. Hierarchical growth, the most plausible theory behind galaxy formation, suggests that galaxies, including the Milky Way, assemble through the accretion of smaller systems over a scaffolding of invisible dark matter. Such growth is evidenced by the distinct stellar structures discovered in the Galaxy over the last few decades, a process accelerated most recently by the Gaia space mission. Yet, we still lack a full picture of the formation of the Milky Way and its stellar components, and we are even further from understanding its underlying dark matter distribution. For the latter, discrepancies between observations and predictions from the CDM model at galactic scales have sparked debate about how well this model accounts for the evolution of the Milky Way. Stellar tracers provide a powerful tool for examining these discrepancies, helping us explore the hierarchical assembly of galaxies in the Local Group and test different models for dark matter. At the same time, cosmological simulations and machine learning techniques offer a bridge between theory and observations.&#13;
&#13;
In this thesis, I combine observations of stellar kinematics and chemistry with cosmological simulations to understand the formation and evolution of the Milky Way and its satellite dwarf galaxies. I map the dark matter distributions in the Milky Way and one of its ultra-faint dwarf galaxies using stellar dynamics, combining simulations of tidal disruption with observational data to study ongoing merger events and how hierarchical assembly shaped the Milky Way we see today. I conduct robust machine learning searches for kinematic substructures from disrupted dwarf galaxy debris in the Milky Way and utilize stellar heavy element abundances to probe the galaxies that merged with the Milky Way in the past. Lastly, I develop synthetic surveys from simulations to bridge gaps between theory and observation, testing the robustness of current and future methodologies in understanding how the Milky Way came to be.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164126</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coating Thermal Noise in Gravitational-Wave Detectors</title>
<link>https://hdl.handle.net/1721.1/164125</link>
<description>Coating Thermal Noise in Gravitational-Wave Detectors
Demos, Nicholas
The direct detection of gravitational waves, originating from cataclysmic events such as black hole and neutron star mergers, has ushered in a new era of observational astronomy. These signals offer unique insights into astrophysical phenomena and fundamental physics, but fully realizing their potential requires continued improvements in detector sensitivity. A primary factor limiting the performance of current ground-based interferometers like Advanced LIGO and Advanced Virgo is thermal noise arising from the highly reflective multilayer coatings on the test mass mirrors. Reducing this coating thermal noise, particularly its Brownian component, while simultaneously maintaining exceptionally low optical absorption and scatter is necessary to advance detector capabilities.&#13;
&#13;
This thesis addresses this challenge through the characterization and development of alternative coating materials and designs. Central to this work is a dedicated experimental apparatus employing a high-finesse folded optical cavity and a multimode co-resonance technique. This system enables direct, high-precision measurements of coating thermal noise in the frequency band relevant to gravitational-wave detectors and allows for relatively rapid evaluation of candidate coatings, providing timely feedback for materials development.&#13;
&#13;
Coating materials such as niobia-based oxides, hafnia-tantala mixtures, and substoichiometric silica were explored, employing strategies like compositional optimization, post-deposition annealing, and multimaterial designs with buried layers. Progress toward lower-noise coatings is demonstrated. Highly reflective coatings based on optimized titania-silica, titania-germania, and ternary silicon nitride structures achieved thermal noise levels approximately 75% of that of current detector coatings. These coatings also exhibited exceptionally low optical absorption, reaching levels near 1 part-per-million following appropriate heat treatment. While challenges related to defect formation during annealing and discrepancies between different noise measurement methodologies were identified, ongoing research, particularly on defect mitigation in materials like titania-germania, continues to advance the field. The findings presented here contribute to the materials science foundation for improving current gravitational-wave detectors and guiding the design of future observatories.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164125</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time-Domain Astrophysics with the Transiting Exoplanet Survey Satellite</title>
<link>https://hdl.handle.net/1721.1/164124</link>
<description>Time-Domain Astrophysics with the Transiting Exoplanet Survey Satellite
Jayaraman, Rahul
The Transiting Exoplanet Survey Satellite (TESS) is conducting an all-sky survey with the primary aim of detecting planets orbiting nearby stars. However, its large field of view and 200 s imaging cadence are useful for other science cases, ranging from stellar astrophysics to transient science. This thesis focuses on using TESS to study both the circumstellar environment and stellar interiors, as well as using the satellite to detect and characterize optical emission from gamma-ray bursts (GRBs). Chapter 2 focuses on the discovery of HD 135348, a "rigidly rotating magnetospheric" star–wherein the stellar magnetic field traps dust in a co-rotating orbit and leads to complex periodic photometric modulations–using solely photometric data. Chapter 3 focuses on the discovery of a long-period subdwarf B (sdB) star using 20 s cadence TESS data and proposes a novel formation mechanism for long-period sdB stars that relies upon stable, nonconservative mass transfer. Chapters 4 and 5 focus on pulsating stars in close binaries, and the evolutionary insights that these "tidally tilted" pulsations enable. In particular, we focus on developing models to track the amplitude and phase of these pulsations as a function of orbital phase, as well as tools to perform physically-motivated modeling of the binary components. Chapters 6-7 focus on the optical signatures of gamma-ray bursts in TESS, and analyze the prompt optical flash that is often observed contemporaneously with the high-energy emission from these bursts. Chapter 7, in particular, aims to connect the prompt optical flash to the high-energy spectral energy distribution (SED), and explains the suppression of the optical flash (compared to the extrapolation of the high-energy SED) by invoking dust extinction in the host galaxy. 
This thesis represents a significant step forward in both stellar and transient astrophysics; throughout this work, we emphasize the use of an unconventional tool–TESS–to pursue timely scientific questions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164124</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Intelligence that can Interact with the Physical World</title>
<link>https://hdl.handle.net/1721.1/164123</link>
<description>Building Intelligence that can Interact with the Physical World
Wang, Tsun-Hsuan (Johnson)
Recent advances in Artificial Intelligence (AI) have demonstrated remarkable success in parsing, reasoning, and generating digital content across modalities such as natural language, speech, images, videos, and 3D data. However, these breakthroughs have yet to extend meaningfully beyond the digital realm into the physical world. Developing AI for physical interaction poses challenges such as limited grounding, scarce physical data, and high reliability demands in safety-critical settings. This thesis takes a holistic approach to building intelligence that can interact with the physical world – through the lenses of data, brain, and body. Data is the fuel powering highly capable AI systems. We present methods for data-driven simulation that synthesize sensor measurements from physical processes, and knowledge-driven simulation that leverages large language models to generate actor behaviors and scenarios. By reverse engineering the generative processes behind physical data, we address data scarcity while enabling scalable and effective evaluation. The brain, driven by data, demands a deep understanding of the physical world and reliable interaction with it. We introduce methods to bridge the internet-scale knowledge of digital AI with the physical world to improve generalization and interpretability. For greater reliability, we integrate control-theoretic modules into AI models to enable certifiability. Beyond behavioral intelligence, the body plays a crucial role in physical interaction. We demonstrate how morphological intelligence can emerge from computation and show how pre-trained generative AI models (brain), when augmented with physics-based simulation that provides feedback on generated data, can be applied to robot design. In sum, this thesis explores how digital AI can be extended into the physical world through a comprehensive investigation of data, brain, and body – laying the groundwork for building physical AI.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164123</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biomolecular Modeling at Scale</title>
<link>https://hdl.handle.net/1721.1/164122</link>
<description>Biomolecular Modeling at Scale
Wohlwend, Jeremy
Predicting the structure and interactions of biomolecules is a fundamental problem in computational biology, with broad implications for disease understanding and drug discovery. Advances in deep learning have enabled remarkable progress, but scaling these approaches to the varied and complex realities of biology is a persistent challenge. This work introduces deep learning methods for biomolecular modeling at scale, designed for efficiency, adaptability, and accessibility. The early chapters present models developed in the general molecular domain, including prediction of structure and interactions for proteins, nucleic acids, and small molecules. To demonstrate how these methods extend to specific biological problems, the latter portion of this work focuses on modeling T cell receptor recognition. As a key immunological mechanism, it highlights the promise of scalable models, but also their present limitations in capturing fine-grained molecular selectivity. Together, these contributions define a framework for bridging foundational models and domain-specific applications, with the potential to scale and meet the demands of increasingly complex biological systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164122</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimessenger signatures of compact binaries</title>
<link>https://hdl.handle.net/1721.1/164121</link>
<description>Multimessenger signatures of compact binaries
Mo, Geoffrey Kwan Lok
Gravitational waves and electromagnetic observations provide complementary views into some of the most extreme objects in the Universe. In this thesis, I present studies of multimessenger compact binaries from two angles: electromagnetic follow-up of gravitational-wave sources, and gravitational-wave follow-up of electromagnetic sources. I first describe technical and computational efforts to enable the distribution of alerts of kHz gravitational-wave sources as a member of the LIGO-Virgo-KAGRA collaboration, and to improve localizations of these events by folding in galaxy catalog information. I then detail work to enable electromagnetic follow-up observations of binary neutron star and neutron star-black hole mergers with two telescopes, the Transiting Exoplanet Survey Satellite (TESS) and the Wide-field Infrared Transient Explorer (WINTER). Approaching multimessenger observations from the opposite direction, I describe a search for gravitational waves coincident with fast radio bursts from the only Galactic fast radio burst source. Lastly, I perform an electromagnetic study of Type Ia supernovae in the mid-infrared, whose white dwarf binary progenitors will be mHz gravitational-wave sources for the future LISA space mission.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164121</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Network Systems Design for Machine Learning</title>
<link>https://hdl.handle.net/1721.1/164120</link>
<description>Efficient Network Systems Design for Machine Learning
Yang, Mingran
Machine learning (ML) is transforming modern life by powering a diverse range of groundbreaking applications. As ML models and datasets expand, the scale of training and inference workloads in modern datacenters is increasing at an unprecedented pace. As the demand for computing resources grows, the need for low-latency and energy-efficient network systems becomes increasingly urgent.&#13;
&#13;
This thesis introduces efficient network systems designed to support machine learning workloads. It presents three key systems: Trio-ML, which accelerates ML training; Lightning, which enhances ML inference efficiency; and on-fiber photonic computing, a forward-looking vision for next-generation computing systems.&#13;
&#13;
The first system, Trio-ML, accelerates data-parallel distributed ML training by leveraging in-network computing on Juniper Networks' programmable chipset Trio. Trio-ML features two key designs: in-network aggregation, which utilizes Trio packet processing threads to aggregate gradients directly inside the network, and in-network straggler mitigation, which utilizes Trio timer threads to detect and address stragglers. We prototype Trio-ML on a testbed with three real DNN models (ResNet50, DenseNet161, and VGG11) to demonstrate its effectiveness in mitigating stragglers while performing in-network aggregation. Our evaluations show that when stragglers occur in the cluster, Trio-ML outperforms today's state-of-the-art in-network aggregation solutions by up to 1.8x.&#13;
&#13;
The second system, Lightning, is the first reconfigurable photonic-electronic smartNIC to serve real-time ML inference requests. Lightning uses a fast datapath to feed traffic from the NIC into the photonic domain without creating digital packet processing and data movement bottlenecks. To do so, Lightning leverages a novel reconfigurable count-action abstraction that keeps track of the required computation operations of each inference packet. Our count-action abstraction decouples the compute control plane from the data plane by counting the number of operations in each task and triggers the execution of the next task(s) without interrupting the dataflow. We evaluate Lightning's performance using four platforms: prototype, chip synthesis, emulations, and simulations. Our simulations with large DNN models show that, compared to the Nvidia A100 GPU, A100X DPU, and Brainwave smartNIC, Lightning accelerates the average inference serving time by 337x, 329x, and 42x, while consuming 352x, 419x, and 54x less energy, respectively.&#13;
&#13;
Building on the in-network computing and photonic computing concepts discussed in Trio-ML and Lightning, we present a forward-looking vision for future computing systems. We argue that pluggable transponders are a prime platform for performing photonic computing inside the network without having to replace networking switches and routers. Optical transponders are ubiquitous in today's wide-area and datacenter networks, giving us a unique opportunity to re-purpose them for photonic computing. To this end, we introduce on-fiber photonic computing, explore key research challenges in bringing this vision to reality, and discuss real-world applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164120</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wireless, Battery-Free, High-Sensitivity 5G RF Energy Harvesters for Next Generation IoT Sensor Tags</title>
<link>https://hdl.handle.net/1721.1/164119</link>
<description>Wireless, Battery-Free, High-Sensitivity 5G RF Energy Harvesters for Next Generation IoT Sensor Tags
Yildirim, Deniz Umut
The Internet of Things (IoT) is revolutionizing various industries, enabling a new wave of smart applications such as automated asset tracking in warehouses, substation monitoring in smart grids, and precision agriculture. However, as IoT devices proliferate, powering these devices in a sustainable and maintenance-free manner has become a critical challenge. Traditional IoT systems rely on batteries, which present issues of limited lifespan, environmental impact, and maintenance costs, especially in large-scale deployments. As a result, the development of battery-free IoT devices powered by ambient energy harvesting has gained significant attention. Among various energy-harvesting technologies, radio frequency (RF) energy harvesting has emerged as a promising solution for powering IoT devices. By harvesting energy from ambient RF signals in licensed frequency bands, RF energy-harvesting systems eliminate the need for batteries and allow for continuous, maintenance-free operation. This is especially crucial in environments where battery replacement is impractical or impossible, such as in large industrial warehouses, remote infrastructure, and hazardous environments. However, achieving high sensitivity and reliable operation in RF energy-harvesting systems poses several challenges. High-sensitivity rectifiers are required to capture and convert weak RF signals into usable energy, but integrating these rectifiers with ultra-low power baseband data processing circuits remains a significant hurdle. Moreover, antenna-rectifier matching calibration must be compatible with the duty-cycled operation of these tags, where brief communication periods are followed by long charging intervals. Additionally, the antenna system must be robust to detuning when placed on various objects, ensuring that the system can operate effectively in diverse environments. This thesis presents two integrated circuits to work towards these goals. 
The first chip is designed with the goal of minimizing the charging time as much as possible, which is critical in scenarios such as inventory management in warehouses and tamper detection. The goal was to achieve &lt; 1-minute charging time while maintaining sensitivity competitive with the state-of-the-art. Unlike previous harvesters that either focused solely on sensitivity without integrating baseband processing and communication, or included those features but considered continuous communication at low sensitivity, the IC developed in this work achieves a sensitivity of −31 dBm and is capable of backscattering data approximately 18 seconds after a cold start. This work also details the difficulty of achieving higher sensitivities at higher 5G frequencies. The second chip in this thesis builds upon the first one and integrates an analog front-end to convert sensor data for environmental monitoring. We implemented an antenna-rectifier calibration method that is maintained as long as there is RF power, even though the tag goes into long charging periods. Even though the charging time, or the data readout interval, for these tags is more relaxed than in inventory management applications, we have also developed a design methodology to minimize the energy required to generate a data packet for backscattering, through which we were able to keep the charging time at 4 minutes while adding functionality and backscattering at a higher data rate than the first chip. Finally, a simple shielding method was implemented to enable the tags to be placed on any object without resonance frequency detuning. All of this was achieved while still obtaining a sensitivity of −30 dBm, competitive with the state of the art. In addition, the third project investigates the use of heterogeneously integrated “beyond-CMOS” devices to enhance overall rectifier performance.
These emerging devices, fabricated by the Palacios Group at MIT, show promise in overcoming sensitivity limitations commonly found in rectifiers, thereby extending the range and coverage of energy-harvesting IoT systems. We conduct a detailed characterization of these devices, highlighting their unique physical behaviors not present in standard CMOS technology, and provide system-level design guidelines for building improved rectifiers. Preliminary simulation results show that rectifiers using negative-capacitance field-effect transistors (NCFETs) can harvest up to four times as much power as their CMOS-based counterparts, while maintaining the same sensitivity. This thesis outlines the design, implementation, and evaluation of all three systems. The two aforementioned ICs are tested both in simulation and in real-world scenarios such as a typical office environment. Meanwhile, the novel device technologies are explored through simulation, demonstrating their significant potential for next-generation rectifier design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164119</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Additive Manufacturing of Electrical Machines and Electronic Devices</title>
<link>https://hdl.handle.net/1721.1/164065</link>
<description>Additive Manufacturing of Electrical Machines and Electronic Devices
Cañada Pérez-Sala, Jorge
Recent advancements in the additive manufacture of electronics and electrical machines have led to successful demonstrations of 3D-printed passive (e.g., resistors, capacitors, inductors) and active (e.g., transistors) electronic components, as well as magnetic cores and power transfer devices. However, each new demonstration of 3D-printed functional devices has typically required increasingly specialized and expensive manufacturing hardware. This work opposes that trend by developing a technology capable of fabricating all such devices on a single, affordable machine: a material extrusion 3D printer. Material extrusion stands out among additive manufacturing technologies for its widespread availability and its compatibility with monolithic multi-material manufacturing, essential for the fabrication of functional electromagnetic devices. These attributes, together with its well-established ability to fabricate mechanically functional parts, make material extrusion a promising technology for the single-step fabrication of electronics and electrical machines, and for their monolithic integration into complex devices, such as custom functionalized prostheses, robots, and space exploration hardware. In this research, a desktop 3D printer was transformed into an almost-universal manufacturing machine capable of fabricating a myriad of electrically, magnetically, and mechanically functional devices, using various feedstock formats (e.g., filament, pellets, ink). With this machine, milestones such as the fabrication of the first semiconductor-free, fully 3D-printed logic gates, and that of the first fully 3D-printed motor, have been achieved. Built for under $4000 in parts, the modified 3D printer opens the door to the democratization of electronics and electrical machine manufacturing, empowering institutions and individuals alike, and serving as an educational tool to introduce advanced manufacturing to new generations.
Additionally, this work investigates optimization strategies for planar inductors and alternative techniques for the creation of miniaturized, three-dimensional, electrically functional components via two-photon polymerization. By demonstrating novel methods and applications, this thesis advances the state of the art in the additive manufacture of electromagnetic devices and paves the way toward the decentralized fabrication of electrical machines and electronic devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164065</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Theory of Representation Learning: How Hidden Relationships Power Algorithms that can Learn without Labels</title>
<link>https://hdl.handle.net/1721.1/164064</link>
<description>A Unified Theory of Representation Learning: How Hidden Relationships Power Algorithms that can Learn without Labels
Hamilton, Mark T.
How does the human mind make sense of raw information without being taught how to see or hear? This thesis presents a unifying theory that describes how algorithms can learn and discover structure in complex systems, like natural images, audio, language, and video - without human input. This class of algorithms has the potential to extend our own understanding of the world by helping us to see previously unseen patterns in nature and science. At the core of this thesis’ unified theory is the notion that relationships between deep network representations hold the key to discovering the structure of the world without human input. This work will begin with a few examples of this principle in action: discovering hidden connections that span cultures and millennia in the visual arts, discovering visual objects in large image corpora, classifying every pixel of our visual world, and rediscovering the meaning of words from raw audio, all without human labels. In the latter half of this thesis, we will present two unifying mathematical theories of unsupervised learning. The first will explain why relationships between deep features can rediscover the semantic structure of the natural world by connecting model explainability, cooperative game theory, and deep feature relationships. The second mathematical theory will show that relationships between representations can be used to unify over 20 common machine learning algorithms spanning 100 years of progress in the field of machine learning. In particular, we introduce a single equation that unifies classification, regression, large language modeling, dimensionality reduction, clustering, contrastive learning, and spectral methods. This thesis uses this unified equation as the basis for a “periodic table of representation learning” that predicts the existence of new types of algorithms. We show that one of these predicted algorithms is a state-of-the-art unsupervised image classification technique.
Finally, this work will summarize the key findings and share ongoing and future directions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164064</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Score Estimation for Generative Modeling</title>
<link>https://hdl.handle.net/1721.1/164063</link>
<description>Score Estimation for Generative Modeling
Jayashankar, Tejas Kumar
Recent advances in score-based (diffusion) generative models have achieved state-of-the-art sample quality across standard benchmarks. Building on the remarkable property of these models in estimating scores, this thesis presents three core contributions: 1) new objectives to reduce score estimation error, 2) a novel Bayesian-inspired optimization framework for solving inverse problems, and 3) a fast one-step generative modeling framework that is based on a novel amortized score estimation framework. In the first part of this thesis, we introduce two new score estimation objectives with applications to both implicit and diffusion-based generative models. To improve spectral-based non-parametric estimators, we propose a theoretically optimal parametric framework that learns the score by projecting it onto its top-L principal directions. Additionally, inspired by matrix-valued kernel methods, we present a second approach that lifts the score into the space of outer products, and minimizes the distance between the estimated and true scores in this higher-order space. In the second part, we shift focus from score estimation to leveraging diffusion models as data-driven priors for solving inverse problems. Centering our development around the problem of source separation, we introduce a novel algorithm inspired by maximum a posteriori estimation. This approach combines multiple levels of Gaussian smoothing with an α-posterior, enabling effective signal separation using only independent priors for the sources. We demonstrate the effectiveness of this method through its application to interference mitigation in digital communication signals. Finally, we outline how this framework can be naturally extended to tackle a broader class of inverse problems. In the final part, we return to the fundamental challenge of efficient sampling, which is critical for enabling practical data-driven engineering systems.
We propose a novel generative modeling framework that enables training a one-step neural sampler from scratch. At the core of this method is a new objective based on multi-divergence minimization, guided by a novel approach for score estimation of mixture distributions. Our framework is simple to implement, stable during training, unifies several existing approaches, and achieves state-of-the-art performance in image generation tasks. Furthermore, we discuss how this framework can be naturally extended to multi-step neural sampling and adapted for fast posterior sampling—an essential component in simulation-based inverse problem solvers.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164063</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Superconducting Nanowire Integrated Circuits for Scalable Cryogenic Memory</title>
<link>https://hdl.handle.net/1721.1/164062</link>
<description>Superconducting Nanowire Integrated Circuits for Scalable Cryogenic Memory
Medeiros, Owen A.
Superconducting nanowire integrated circuits (SNICs) are a promising class of cryogenic electronics that harness the zero resistance, high kinetic inductance, and nanoscale geometry of ultrathin superconducting wires to implement logic, memory, amplification, and sensing with minimal energy dissipation. Unlike Josephson-junction-based circuits, SNICs support compact, planar layouts compatible with single-layer fabrication and operation in unshielded cryogenic environments. This thesis develops superconducting nanowire memory (SNM) as a scalable implementation of SNICs. A modular cell architecture is introduced, exploiting hysteretic switching and inductive asymmetry to enable nonvolatile digital state storage with zero static power consumption. A hierarchical design framework is established, combining automated layout generation, electrothermal simulation in LTspice, and microscopic modeling using the time-dependent Ginzburg–Landau (TDGL) formalism. To enable scalable integration, this work implements a row–column SNM array layout and demonstrates fabrication across full 4-inch wafers using a planar, single-layer process. Cryogenic measurements validate reliable operation in both single cells and multi-cell arrays, confirming the predictive accuracy of the design and modeling framework. Tradeoffs in bias current levels, pulse timing, and read/write conditions are systematically evaluated through cryogenic measurements, revealing their impact on bit error rate, operational margins, and energy efficiency across single cells and arrays. Together, these contributions establish SNICs as a viable and extensible platform for cryogenic memory, providing the tools, models, and infrastructure needed to enable broader adoption in quantum computing, neuromorphic systems, and other energy-constrained cryogenic applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164062</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Next Generation Operating Systems for the Datacenter</title>
<link>https://hdl.handle.net/1721.1/164061</link>
<description>Next Generation Operating Systems for the Datacenter
Fried, Joshua
Modern datacenters face a fundamental challenge: handling demanding real-time and data-intensive workloads that require both microsecond-scale low latency and high throughput, while simultaneously achieving high resource utilization and efficient multi-tenancy. Traditional operating systems, designed for an era of slower hardware, introduce significant overheads to microsecond-scale I/O that prevent applications from exploiting the full performance of the underlying hardware. Furthermore, their millisecond-scale resource management is ill-equipped to handle the microsecond-level burstiness of modern workloads, leading to costly overprovisioning and idle resources. Recognizing the performance limitations imposed by traditional OSes, a common workaround has emerged: letting applications communicate directly with hardware, bypassing the OS entirely. While this approach offers performance gains by removing the OS from the critical path, existing kernel-bypass solutions require dedicated resources, extensive application rewrites, and provide weak isolation, making them unsuitable for general-purpose, shared environments. This thesis presents a new datacenter operating system, composed of three integrated systems: Shenango, Caladan, and Junction. Together, they preserve the high-performance, low-overhead I/O benefits of kernel bypass, while providing efficient resource management, strong isolation for multi-tenant workloads, and compatibility with unmodified software. First, Shenango enables applications to bypass traditional OS-mediated I/O without dedicating CPU cores solely to polling. Next, Caladan ensures that idle resources can be used productively by other applications by actively managing competition for microarchitectural resources, thereby preserving each application’s high I/O performance and responsiveness.
Finally, Junction overcomes several common limitations of kernel-bypass solutions, bringing these benefits to all applications by preserving compatibility with existing software and reducing memory and polling overheads. Collectively, these systems provide the advantages of direct hardware access without sacrificing the flexibility or efficiency of a general-purpose operating system. This work demonstrates that high I/O performance, efficient resource utilization, and broad application compatibility can indeed coexist, paving the way for a new generation of datacenter operating systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164061</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Development of Healthcare AI: From Data Curation,&#13;
Algorithm Optimization, Benchmark Design and Clinical Applications</title>
<link>https://hdl.handle.net/1721.1/164060</link>
<description>Systematic Development of Healthcare AI: From Data Curation,&#13;
Algorithm Optimization, Benchmark Design and Clinical Applications
Gao, Mingye
Artificial intelligence (AI) has brought transformative changes to the healthcare industry in recent years across various aspects, such as patient care, disease diagnosis, and medical research. As healthcare systems worldwide face increasing pressure from aging populations and rising chronic disease rates, there is an urgent need for systematic approaches to develop reliable and safe AI solutions. This thesis advances the systematic development of healthcare AI through four interconnected components: data curation, algorithm optimization, benchmark design, and clinical applications. The primary contribution of this thesis focuses on establishing a comprehensive pipeline for healthcare large language models (LLMs), spanning from data curation to clinical deployment. At the data level, a rule-based filtering framework was developed to select high-quality subsets from large pre-training corpora, significantly improving both continued pre-training and fine-tuning performance of LLMs. For safety alignment, an automated pipeline was developed for preference learning that includes preference dataset synthesis, rule-based and data-adaptive annotation, and reward model training. Additionally, two novel benchmarks were created to ensure the reliability and safety of LLMs in healthcare tasks: one assessing demographic biases of LLMs across common diseases, and another assessing models’ ability to reject illogical requests from users in drug-related scenarios. Finally, LLMs were used to generate patient-friendly educational content for clinical trials, demonstrating their role in improving patient education and engagement in clinical trials. This systematic progression from data to deployment establishes a blueprint for developing safe and effective language models in healthcare settings. Beyond language models, machine learning techniques were applied to an additional healthcare task.
In this project, a novel approach combining normalized cross-correlation and attention graph convolutional recurrent networks was developed to realize contactless, continuous, and reliable radar-based vital signs monitoring in dynamic home environments. Through systematic data collection and algorithm optimization, accurate heart rate estimates can be obtained across varying radar-subject distances (2-2.5 m) and subject orientations, demonstrating robust performance in real-world conditions through extensive validation in four test houses with six subjects. Collectively, these contributions advance healthcare AI development on two fronts: establishing frameworks for safe and effective deployment of language models in healthcare settings, and enabling reliable, continuous at-home health monitoring without wearable devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164060</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>SERenaDE: Hardware Acceleration of Cloud Serialization Frameworks</title>
<link>https://hdl.handle.net/1721.1/164059</link>
<description>SERenaDE: Hardware Acceleration of Cloud Serialization Frameworks
Zarkos, Christos V.
Serialization frameworks are a fundamental building block of datacenters, as they enable language- and platform-neutral communication and storage. However, software serialization faces major performance bottlenecks, resulting in a significant fraction of cloud cycles dedicated to this process. Prior work has proposed specialized hardware accelerators to address these overheads. While these proposals achieve considerable speedups, they are expensive in terms of verification, fabrication, and deployment, and often hardcode too many details of the (de)serialization framework in hardware. We propose SERenaDE, a serialization framework designed to integrate general-purpose accelerators currently deployed in datacenters in order to offload serialization to hardware. Specifically, we repurpose the Intel In-Memory Analytics Accelerator (IAA), an accelerator engine offering fast compression, to enable fast, user-transparent serialization and deserialization, completely removing software serialization from the execution pipeline. We evaluate our system on latest-generation production machines, with both synthetic microbenchmarks and representative open-source fleet-wide benchmarks. Our results show comparable performance in terms of per-request latency across all types of messages, while significantly improving throughput (especially at the tail), maintaining thread scalability, and achieving high compression ratios alongside substantial speedups for larger messages. Under 95th-percentile latency constraints, SERenaDE improves serialization and deserialization throughput by 13% and 30%, respectively, while producing serialized messages 0.2x to 6.94x smaller for messages whose total memory layout exceeds 4 KB.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164059</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Biomolecular Interactions with Generative Models</title>
<link>https://hdl.handle.net/1721.1/164058</link>
<description>Modeling Biomolecular Interactions with Generative Models
Corso, Gabriele
In 2021, DeepMind’s AlphaFold2 revolutionized single-chain protein structure prediction, achieving atomic accuracy and solving a longstanding challenge in biology. However, understanding biomolecular interactions, a critical problem for advancing drug discovery and biological research, remained unsolved. This thesis presents our research to redefine the machine learning approach to this problem, modeling structures with a new generative paradigm and tailoring the neural architectures and learning tasks to the specific challenges that arose. These ideas, combined with significant engineering efforts, led us to develop a class of open-source models from DiffDock to the recent Boltz-1. These models have significantly advanced our ability to understand biomolecular interactions; they have been widely adopted in industry and academia to support drug development and protein design, and they have opened the door to new research paradigms to push biological research further.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164058</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Hardware Accelerators for Solving Sparse Linear&#13;
Systems</title>
<link>https://hdl.handle.net/1721.1/164057</link>
<description>Designing Hardware Accelerators for Solving Sparse Linear&#13;
Systems
Feldmann, Axel
Solving sparse linear systems is a key primitive that sits at the heart of many important numeric algorithms. Because of this primitive’s importance, algorithm designers have spent many decades optimizing linear solvers for high-performance hardware. However, despite their efforts, existing hardware has let them down. State-of-the-art linear solvers often utilize &lt; 1% of available compute throughput on existing architectures such as CPUs and GPUs. There are many different algorithms used to solve sparse linear systems. These algorithms are diverse and often have very different computational bottlenecks, including low arithmetic intensity, fine-grained parallelism, tight dependences, and sparsity-induced load imbalance. This thesis studies the problem of designing hardware accelerators for sparse linear solvers. We propose three novel architectures that explore different parts of the design space. The accelerators exploit static sparsity as the basis of novel hardware-software co-designed scheduling approaches. First, we introduce Spatula, an architecture designed to accelerate direct solvers. Then, we propose Azul, a hardware accelerator targeted at iterative solvers. Taken together, Spatula and Azul demonstrate significant speedups on both of the main classes of sparse linear solver algorithms. Finally, to show that our techniques are useful for end-to-end applications, we present Ōmeteōtl, an accelerator targeted at applications that use iterative solvers in their inner loop. Ōmeteōtl also shows that the techniques in this thesis generalize to sparse matrix computations beyond linear solvers. These accelerators deliver order-of-magnitude speedups over state-of-the-art GPU baselines, achieving &gt; 100× speedups on many inputs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164057</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-Optimized Design of 3D Shapes with Part-Based Control</title>
<link>https://hdl.handle.net/1721.1/164056</link>
<description>Physics-Optimized Design of 3D Shapes with Part-Based Control
Zhan, Sean
We introduce PhysiOPart, a computational approach for rapid generative design of 3D objects optimized for physical integrity. PhysiOPart enables users to edit and combine object parts to explore a vast design space. To model continuous surfaces of arbitrary resolution without topology restrictions, we parametrize parts with neural implicit representations. However, when parts are assembled to form an object, the resulting geometry is not guaranteed to be functional. Existing generative modeling approaches use task-specific neural predictors to approximate physical behaviors with limited accuracy. We propose an end-to-end differentiable physics simulation pipeline that performs linear static analysis to optimize for user-specified objectives, leveraging learned geometry priors. Our part-based formulation with the finite element method is highly customizable, allowing for user-defined per-part materials, loads, and boundary conditions. The optimized designs exhibit improved physical behavior and can be fabricated.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164056</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Assembly of Curved Structures from Flat Configuration</title>
<link>https://hdl.handle.net/1721.1/164055</link>
<description>Fast Assembly of Curved Structures from Flat Configuration
Zaman, Akib
Imagine deploying an emergency shelter that transitions seamlessly from a flat configuration to a lifted structure, or a folded robot that is sent through a tunnel and subsequently activated to expand into a larger form at the endpoint, with a single, collective pull of strings. This scenario raises two critical questions: (i) how to decompose the structure into a flat state that encodes the 3D geometry, and (ii) where to place strings through the unit modules to achieve complete actuation. Although these questions have been explored individually, comprehensive solutions remain scarce. To address this challenge, this thesis presents a computational approach for designing freeform structures that can be rapidly assembled from initially flat configurations by a single string pull. Target structures are decomposed into rigid, spatially varied quad tiles optimized to approximate a user-provided surface, forming a flat mechanical linkage. A two-step algorithm is then applied to determine a physically realizable string path that controls only a subset of tiles, enabling smooth actuation from flat to assembled configuration. First, the minimal subset of tiles required for string control is computed by considering both the structure’s geometry and inter-tile interactions. Second, a valid string path is identified through these tiles that minimizes friction, thereby transforming the flat linkage into the target 3D form upon tightening a single string. The resulting designs can be manufactured in flat form using computational fabrication techniques such as 3D printing, CNC milling, or molding, thereby simplifying both production and transportation. Validation is provided through a series of physical prototypes and application case studies, ranging from medical devices and space shelters to large-scale architectural installations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164055</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical Foundations for Learning in Games and Dynamic Environments</title>
<link>https://hdl.handle.net/1721.1/164054</link>
<description>Theoretical Foundations for Learning in Games and Dynamic Environments
Golowich, Noah
Decision-making problems lie at the heart of numerous aspects of human and algorithmic behavior across our society, ranging from healthcare systems to financial systems to interactions with the physical world. A central challenge that arises across many decision-making problems is the presence of multiple agents, often with competing incentives. To understand how agents will act in such situations, it is often productive to compute equilibria, which have the property that no agent can deviate from them and improve their utility. An additional challenge is that decisions made by agents often change the state of the environment, which is modeled as dynamic. Thus, we need efficient algorithms for learning good policies, which tell the agent what to do as a function of the environment’s state. Extensive work spanning multiple domains such as economics, computer science, and statistics has been developed to model these decision-making problems. This has led to many celebrated results, which include, for instance, a considerable body of work studying the computational properties of Nash equilibria in normal-form games, and a long line of papers on reinforcement learning. However, many of these classical works suffer from a few shortcomings: first, they often do not account for the enormous state or action spaces available to agents in realistic decision-making settings, and second, many of them do not derive computationally efficient algorithms for the desired solution concepts. These shortcomings are brought to the forefront by the remarkable recent progress in artificial intelligence, which holds promise for solving decision-making problems with enormous state or action spaces but which is often bottlenecked by computation. 
The objective of this thesis is to develop theoretical foundations for the computational aspects of such decision-making problems: e.g., How do we efficiently compute equilibria in large games? And: How can we efficiently learn near-optimal policies in complex environments? Some highlights of our results are listed below. First, we study problems in which there are multiple agents and the goal is to compute some notion of equilibrium:
• We show the first near-optimal rate of convergence to equilibrium for a no-regret learning algorithm in normal-form games, resolving a decade-long line of work which had aimed to establish increasingly better rates.
• We establish the first algorithm with sublinear swap regret against arbitrary adversaries enjoying only polylogarithmic dependence on the number of actions, resolving a question of Blum and Mansour from 2007.
• As a corollary of the preceding result, we obtain the first polynomial-time algorithm for approximating a correlated equilibrium in extensive-form games (to constant approximation error), addressing a question of von Stengel &amp; Forges from 2008. Additionally, we obtain near-optimal bounds on the communication and query complexity of approximating correlated equilibria in normal-form games (to constant approximation error), addressing several open problems in the literature.
• We give the first algorithm for the sequential calibration problem with calibration error beating that of the seminal work of Foster &amp; Vohra from 1998.
Moving on to decision-making problems where the environment is modeled as dynamic (typically studied in the framework of reinforcement learning (RL)), our results include the following:
• We give the first end-to-end computationally efficient algorithms for learning a near-optimal policy in many fundamental reinforcement learning problems, such as those of (constant-action) Linear Bellman Complete MDPs and sparse linear MDPs.
• We give the first quasi-polynomial time algorithm for finding a near-optimal policy in a general and well-motivated class of partially observable RL environments, and show that our bound is tight.
• We prove some (perhaps surprising) hardness results that arise in multi-agent RL problems. For instance, we show that it is computationally hard to implement no-regret learning algorithms in multi-agent RL environments even when the agents can coordinate on their choice of algorithm, which creates a stark contrast with simpler multi-agent learning settings (e.g., in normal-form games) where no-regret learning has formed the bedrock for a wide array of developments over the last several decades.
• Nevertheless, we show that by adjusting the type of equilibrium appropriately, we can circumvent the above hardness results and derive computationally efficient decentralized algorithms for computing equilibria in multi-agent RL environments.
Many of the above results have inspired follow-up work which includes applications of our results to various problems in game theory, reinforcement learning, online learning, and related domains, as well as the formulation of new problems which are inspired by the above results.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164054</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing human vision through large-scale brain imaging and computational models</title>
<link>https://hdl.handle.net/1721.1/164053</link>
<description>Characterizing human vision through large-scale brain imaging and computational models
Lahner, Benjamin
Efforts to understand the neural underpinnings of human visual processing require sufficient experimental data and robust models. This thesis significantly contributes to both these fronts while simultaneously elucidating some of the most intriguing aspects of the human visual system. In the first chapter, I use a combination of classical machine learning, artificial neural networks, and a joint MEG/fMRI neuroimaging dataset to reveal that the human visual system extensively processes highly memorable images in regions distributed throughout visual cortex late in time. In the second chapter, I present the BOLD Moments Dataset, a large-scale fMRI dataset using short video stimuli to extend computational models of visual processing into the video domain to better understand how humans process visual content unfolding over time. The last chapter introduces an fMRI dataset aggregation framework titled MOSAIC to achieve the scale and stimulus diversity needed for training modern neural networks directly on brain responses. This body of work exemplifies how large-scale experimental data and artificial neural networks can contribute towards a robust and generalizable understanding of human visual processing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164053</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wireless Systems for a Sustainable Future: From Battery-Free Subsea IoT to THz-Based Agriculture Monitoring</title>
<link>https://hdl.handle.net/1721.1/164052</link>
<description>Wireless Systems for a Sustainable Future: From Battery-Free Subsea IoT to THz-Based Agriculture Monitoring
Afzal, Sayed Saad
This thesis describes how wireless sensing can drive significant advancements in climate and sustainability. Specifically, it shows how we can leverage diverse signals—acoustics, ultrasound, THz, and optics—in unconventional ways to unlock new capabilities in underwater climate monitoring, food safety, and disaster response. The thesis introduces two novel technologies. The first technology enables long-term, ultra-low power ocean sensor networks for use in climate modeling, marine monitoring, and sustainable aquaculture. Unlike existing IoT technologies – like Bluetooth, WiFi, and GPS – which cannot work underwater, we design and implement an ultra-low power subsea backscatter communication system, enabling battery-free underwater imaging, sensing, and localization. Second, the thesis describes a new technology that can support sustainability in agriculture through real-time food quality assessment that reduces food waste. In contrast to existing food quality technologies that require direct contact with produce, we introduce a new wireless system for accurate, non-invasive sensing using sub-THz signals. We describe the design, implementation, and evaluation of multiple systems that leverage these technologies to monitor the ocean and reduce food waste: First, we present an ultra-wideband metamaterial sensor design that facilitates scalable, long-range, battery-free underwater communication. Next, we describe a system that can push the throughput of this technology using higher-order modulation. Beyond building sensor networks, we demonstrate their real-world potential through two systems: one for underwater localization that uses rich spatio-temporal-spectral features for accurate positioning, and another for battery-free imaging that fuses acoustic and optical signals to capture color images in the dark. Finally, we present a novel solution for accurate fruit ripeness sensing using sub-terahertz wireless signals.
These systems unlock new IoT applications in climate modeling, aquaculture, robotics, and agriculture.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164052</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Integration and Differentiation of Probabilistic Programs</title>
<link>https://hdl.handle.net/1721.1/164051</link>
<description>Automatic Integration and Differentiation of Probabilistic Programs
Lew, Alex K.
This thesis addresses the challenge of automating fundamental operations from probability theory and calculus on probability distributions defined by higher-order probabilistic programs. It does this by developing a suite of composable program transformations for an expressive core calculus for probabilistic programming:
• Integration: Compiling a probabilistic program into a deterministic representation of its expectation operator, handling potentially intractable integrals symbolically.
• Unbiased estimation: Transforming programs involving intractable operations (like integration) into runnable probabilistic programs that yield provably unbiased estimates of the original value, with flexible levers for users to navigate cost-variance trade-offs.
• Radon-Nikodym differentiation: Compiling probabilistic programs into implementations of a novel interface for the unbiased estimation of density ratios, of the sort that arise in Monte Carlo and variational inference.
• Differentiation: Extending automatic differentiation (AD) to compose with the above transformations, enabling the optimization of expected values and density ratios of probabilistic programs.
These transformations operate on an expressive higher-order probabilistic programming language and are proven correct using denotational semantics and logical relations. The resulting framework enables the sound and automated implementation of a wide range of algorithms for probabilistic inference and learning. To demonstrate the practical value of these techniques, we use them to implement three systems for scalable probabilistic inference in different domains: (1) extensions to the Gen probabilistic programming system that accelerate and automate a broad range of Monte Carlo and variational inference algorithms, (2) the PClean system for automated Bayesian reasoning about relational data, and (3) the GenLM system for controllable generation from language models.
We find that our techniques enable these systems to scale to a variety of complex, real-world problems, and to achieve state-of-the-art performance on a range of benchmarks.
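The cost-variance lever behind unbiased estimation can be illustrated with a minimal Python sketch (the helper name `mc_estimate` is hypothetical, and this deliberately simplifies away the program-transformation machinery the thesis describes): replacing an intractable expectation with an n-sample Monte Carlo average stays unbiased for any n, by linearity of expectation, while n trades compute for variance.

```python
import random

def mc_estimate(sample, f, n=1):
    # Stand-in for the "unbiased estimation" idea: replace an
    # intractable expectation E[f(X)] with an n-sample Monte Carlo
    # average. By linearity of expectation the average is unbiased
    # for every n; n is the lever trading cost against variance.
    return sum(f(sample()) for _ in range(n)) / n

# With a degenerate "distribution" the estimate is exact for any n:
assert mc_estimate(lambda: 2.0, lambda x: x * x, n=5) == 4.0

# For U ~ Uniform(0, 1), mc_estimate(random.random, lambda x: x * x, n)
# is an unbiased estimate of E[U^2] = 1/3 at any n; larger n only
# shrinks the variance, not the bias.
```

The real transformations compose such estimators through arbitrary higher-order programs and prove unbiasedness denotationally, which is far beyond this single-expectation sketch.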
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164051</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimizer-space computation</title>
<link>https://hdl.handle.net/1721.1/164050</link>
<description>Minimizer-space computation
Ekim, Barış C.
As the volume of DNA sequencing data increases, so does the need for algorithmic advances to handle it efficiently. One such advance is the use of minimizers, genomic substrings that allow for reduced representations of larger DNA sequences. In this thesis, we introduce minimizer-space computation as a new algorithmic paradigm for DNA sequence analysis. Instead of DNA nucleotides, we consider minimizers as the letters of an extended alphabet in which algorithms operate. We present several techniques for efficiently constructing these extended alphabets, demonstrate how to develop approaches that use these alphabets and consequently only a fraction of the sequence data, and show how fundamental biological tasks, such as genome assembly and read mapping, can be significantly accelerated over state-of-the-art methods.
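A minimal sketch of the minimizer idea (an illustration of the standard (w, k)-minimizer definition, using lexicographic order for simplicity; not the thesis's construction): each window of w consecutive k-mers contributes its smallest k-mer, and only those surviving k-mers are kept as letters of the reduced alphabet.

```python
def minimizers(seq, w, k):
    """Collect the (w, k)-minimizers of a DNA string: for each window
    of w consecutive k-mers, keep the lexicographically smallest one.
    Only a fraction of the k-mers survives, giving the reduced
    'minimizer-space' representation of the sequence."""
    mins = set()
    for i in range(len(seq) - w - k + 2):  # window start positions
        window = (seq[j:j + k] for j in range(i, i + w))
        mins.add(min(window))
    return mins
```

For example, minimizers("ACGTACGT", w=2, k=3) keeps {"ACG", "CGT", "GTA"} out of the six overlapping 3-mers; downstream algorithms then operate on these letters instead of on individual nucleotides.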
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164050</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performant and Resilient Service Composition for Modern Cloud Applications</title>
<link>https://hdl.handle.net/1721.1/164049</link>
<description>Performant and Resilient Service Composition for Modern Cloud Applications
Li, Tianyu
Modern cloud applications are often distributed systems composed from vendor-provided building blocks (e.g., object storage services, container orchestration services). Consequently, distributed fault-tolerance is a central concern for application correctness. Although each building block may offer individual fault-tolerance, the end-to-end application is still susceptible to failures, because the composition logic that orchestrates them may still fail. This thesis explores resilient composition, a systematic way to assemble fault-tolerant components into resilient end-to-end distributed applications. We begin by presenting the fail-restart system model, which captures the unique fault-tolerance challenges that arise when composing services. Based on this model, we define Composable Resilient Steps (CReSt), an atomic programming abstraction that guarantees fault-tolerance across the assembled application. We then detail efficient methods for implementing CReSt using a range of database techniques, and a novel distributed protocol that allows optimistic, speculative execution ahead of slower fault-tolerance safeguards. Together, these pieces allow developers to assemble fault-tolerant distributed systems that are correct by construction and often more performant than existing solutions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164049</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Succinct Cryptography via Propositional Proofs</title>
<link>https://hdl.handle.net/1721.1/164048</link>
<description>Succinct Cryptography via Propositional Proofs
Mathialagan, Surya
The goal in modern cryptography is to obtain security while minimizing the use of computational resources. In recent years, we have been incredibly successful in our pursuit of efficiency, even for cryptographic tasks that were thought to be “science fiction”. For example, we have constructions of fully homomorphic encryption and private information retrieval from standard cryptographic assumptions which achieve the ideal levels of succinctness. However, there are still some tasks in cryptography where achieving the “ideal” efficiency from standard assumptions has evaded us. In this thesis, we study the problem of achieving succinctness in two such settings:
• Can we construct succinct indistinguishability obfuscation (IO) for Turing machines? In particular, can we construct an obfuscated program whose size is independent of the input length?
• Can we construct succinct non-interactive arguments (SNARGs) for all of NP?
While the problems seem unrelated at first glance, the root difficulty seems to stem from a similar place: both primitives have non-falsifiable security definitions. In fact, this type of barrier exists for many other cryptographic primitives, including witness encryption. This leads to a central question which we refer to as the “non-falsifiability barrier”: how can we construct non-falsifiable primitives from falsifiable assumptions? In this thesis, we show how to leverage propositional proofs to overcome the non-falsifiability barrier, and make substantial progress toward achieving succinctness in both settings. Our main result is a universal construction of both SNARGs and succinct IO for Turing machines from standard assumptions using propositional proofs. We then show several applications, including rate-1 IO for many programs, the first succinct secret sharing schemes for monotone circuits, and many more.
Our results establish propositional proofs as a foundational tool for achieving succinctness across a broad range of cryptographic settings.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164048</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical and Algorithmic Thresholds in Spin Glasses</title>
<link>https://hdl.handle.net/1721.1/164047</link>
<description>Statistical and Algorithmic Thresholds in Spin Glasses
Huang, Brice
This thesis studies spin glasses, disordered complex systems originating in statistical physics. Such systems model optimization, sampling, and inference problems from probability and statistics, which are of fundamental importance to modern data science. In particular, spin glasses provide natural examples of random, high-dimensional, and often highly non-convex cost or log-likelihood functions, making them an excellent testing ground for such questions. Part I of this thesis studies statistical properties of these models. Chapter 2 identifies the storage capacity of the Ising perceptron, a simple model of a neural network, subject to a numerical condition. This gives a conditional proof of a 1989 conjecture of Krauth and Mézard. Chapter 3 gives a new proof of the celebrated Parisi formula for the free energy of the spherical mean-field spin glass, which was first proved by Talagrand and in more generality by Panchenko. Our proof takes a simpler modular approach, drawing on recent advances in spin glass free energy landscapes due to Subag. Chapter 4 characterizes the topology trivialization phase transition of multi-species spherical spin glasses and shows that low-temperature Langevin dynamics finds the ground state in the topologically trivial regime; the latter result is new even in the single-species setting. Part II of this thesis concerns algorithms for optimization and sampling problems on spin glasses. Chapter 5 studies the problem of optimizing the Hamiltonian of a multi-species spherical spin glass. Our main result exactly characterizes the maximum value attainable by a class of algorithms that are suitably Lipschitz in the disorder. This class includes gradient-based algorithms and Langevin dynamics on constant time scales, and in particular includes the best algorithm known for this problem. 
This chapter is part of a series of works where we establish exact algorithmic thresholds using the branching overlap gap property (OGP), a landscape property introduced in our earlier work (which appears in our S.M. thesis). In this chapter, we develop a more robust way to establish the branching OGP that does not require Guerra’s interpolation; this allows our method to be applied to models well beyond the (single-species) mean-field spin glass we previously considered. Chapters 6 and 7 study sampling from the Gibbs measure of a spherical mean-field spin glass. Chapter 6 develops a sampling algorithm based on simulating Eldan’s stochastic localization scheme, while Chapter 7 analyzes simulated annealing of Langevin dynamics. We prove both algorithms succeed for inverse temperatures up to a stochastic localization threshold. Chapter 6 gives the first stochastic localization-based sampler with a guarantee of vanishing total variation error, improving on earlier algorithms with vanishing Wasserstein error. Chapter 7 provides the first provable guarantees for a Markov chain in this model beyond the uniqueness threshold, where mixing from worst-case initialization is provably slow.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164047</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Cavity-Coupled Rydberg Atom Array for Quantum Science and Quantum Computing</title>
<link>https://hdl.handle.net/1721.1/164046</link>
<description>A Cavity-Coupled Rydberg Atom Array for Quantum Science and Quantum Computing
Hu, Beili
Neutral atom arrays have rapidly emerged as a leading platform for quantum computing, boasting scalable, configurable arrays of single atoms trapped in optical tweezers, fast, high-fidelity entangling gates through Rydberg interactions, and programmable, parallelized control of qubit operations. Coupling an atom array to an optical cavity opens a new frontier. Leveraging enhanced light-atom interactions in cavity quantum electrodynamics, cavity- coupled atom arrays acquire capabilities that can further expand the neutral atom toolbox, including cavity-enhanced atom readouts, atom-photon entanglement, and photon-mediated interactions between distant atoms.&#13;
&#13;
This thesis presents a quantum hardware platform that integrates an array of neutral atoms with a high-finesse optical cavity. After describing the design and development of the experimental apparatus, I demonstrate high-fidelity atom state readout through the cavity, achieving improved speed and atom survival compared to conventional free-space imaging methods. I then introduce a new technique for selectively controlling atom-cavity coupling on arbitrary subsets of the array, using local AC Stark shifts on the excited states of the atoms. Building on these tools, I demonstrate fast, non-destructive cavity-based readout of atom arrays, addressing a crucial bottleneck of atom array platforms. I also showcase real-time measurement and feedback capabilities with a demonstration of classical error correction, using a register of atomic bits. Finally, I describe progress toward implementing single- and two-qubit gates within the cavity-coupled system. By combining coherent control, tunable interactions, and integrated high-fidelity, non-destructive readout with real-time feedback, the cavity-coupled Rydberg atom array offers a promising path toward fault-tolerant quantum computing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164046</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimation, Prediction and Counterfactual Inference with Dependent Observations</title>
<link>https://hdl.handle.net/1721.1/164045</link>
<description>Estimation, Prediction and Counterfactual Inference with Dependent Observations
Kandiros, Anthimos Vardis
The success of modern data science is largely driven by access to large-scale, high dimensional data. Much of classical machine learning has been developed under the assumption that this data is generated independently from some distribution. However, this assumption is often violated when data exhibit complex dependencies across a spatial or temporal domain, or due to social interactions. In this thesis, our goal is to design and analyze methods that address these dependencies for performing three fundamental estimation tasks: unsupervised learning, supervised learning, and counterfactual inference. In unsupervised learning, we observe a sequence of unlabeled examples and our goal is to infer some structural property of the distribution they came from. The presence of dependencies could severely complicate this question. Our results in this direction encompass both fully observable as well as latent variable models. For fully observable models, we use the celebrated Ising model to describe the dependencies. Assuming we have access to a single sample from some Ising model, which captures a variety of real-world scenarios, we design and analyze polynomial time algorithms for recovering the matrix corresponding to the network structure of the model. We then leverage these techniques to obtain improved guarantees for estimating Ising models in Total Variation (TV) distance from multiple samples. For latent variable models, we focus on the case where the structure is a tree and we get samples from the leaves, which is a common scenario in phylogenetics. Assuming the model is Gaussian, we analyze the behavior of the Expectation-Maximization (EM) algorithm, a popular heuristic for latent variable models. We show that for trees with a single latent node, EM converges to the true model, and for general tree topologies, the only stationary point in the interior of the domain is the true model. 
We then shift our focus to discrete models and study latent tree Ising models, for which we provide polynomial time algorithms for learning the distribution of leaves in TV distance. In supervised learning, we observe a sequence of feature-label pairs and our task is to learn the predictive relationship between the features and the labels. Here, this relationship could be confounded by the presence of dependencies among labels. We formulate this question as a regression problem, where the labels of the units follow the joint distribution of an Ising model with an unknown strength parameter and external fields that are determined by the regression function. We characterize the minimax optimal rate of estimation for the various parameters and provide an efficient algorithm that achieves it. Interestingly, it might not be possible to estimate all the parameters in some cases. In counterfactual inference, we focus on the design of network experiments, where the treatment of a unit could affect the outcome of a neighboring unit in an underlying graph. Our goal is to estimate a general causal effect that can be defined as the average difference in outcomes for a unit under two different interventions. For an arbitrary such effect, we propose an experimental design, called the conflict graph design. For an unbiased estimator of that effect, we prove bounds on its variance that yield the best known rates of estimation for various effects studied in the literature, such as the average direct effect and the total effect, but also provide estimation rates for effects that have received less attention from the perspective of experimental design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164045</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hardening Trusted Execution Environments Against Microarchitectural Side-Channel Attacks: A Constructive Approach</title>
<link>https://hdl.handle.net/1721.1/164044</link>
<description>Hardening Trusted Execution Environments Against Microarchitectural Side-Channel Attacks: A Constructive Approach
Dréan, Jules Guillaume Jacques Bénony D
Trusted Execution Environments (TEEs) [1–5] promised to enable secure computation even in the presence of privileged adversaries by providing hardware-enforced isolation. However, the discovery of microarchitectural side-channel and transient execution attacks [6–10] has severely undermined these security guarantees. These attacks exploit shared hardware resources and speculative execution to leak sensitive information across security boundaries, effectively bypassing the architectural isolation enforced by TEEs. The widespread impact of these vulnerabilities is evidenced by more than 43 published attacks [11] targeting commercial TEE platforms including Intel SGX, AMD-SEV, and ARM TrustZone. Existing approaches to defend against these attacks face significant limitations. Hardware-based solutions [12–14] often require complex processor modifications with significant hardware overhead. Replacing trusted hardware with cryptographic approaches incurs prohibitive performance overheads [15]. Meanwhile, formal verification methods struggle to scale to realistic code base sizes and often fail to capture subtle microarchitectural behaviors [16–18]. This thesis proposes a constructive approach to TEE security and demonstrates that practical defenses against microarchitectural attacks are achievable through careful system design. Rather than relying only on models and simulations, we focus on constructing systems that are secure by design. Our work is concretely realized through the design, implementation, and evaluation of two novel platforms: First, we present Citadel, a TEE platform that enables secure shared memory while providing precise guarantees against microarchitectural side-channel attacks. Citadel introduces relaxed microarchitectural isolation (RMI), a novel security property that allows programs to share memory while restricting information leakage to that of a non-speculative execution. 
To achieve RMI, Citadel combines hardware-enforced microarchitectural isolation with two simple mechanisms for controlled speculation: SpecSafe, which prevents speculative shared-memory accesses entirely, and Burst mode, which enables better performance through constrained speculation on small code snippets. Through a fully functional FPGA prototype, we demonstrate that Citadel can run real-world applications including cryptographic libraries and private ML inference with less than 5% overhead while maintaining strong security guarantees. Second, we develop Argos, an “integrity-only” TEE specifically designed for verifiable fully homomorphic encryption, that enables the deployment of FHE schemes in real-world settings where malicious security is required. We show that by carefully constraining the attack surface and employing simple hardware mechanisms, we can achieve complete security against microarchitectural attacks. Argos introduces a simplified transcript-based attestation scheme that only requires one signature per FHE computation, amortizing the cost of relying on a physical TPM to microarchitecturally isolate secrets. Argos not only enforces circuit-level integrity of FHE schemes but can also be extended to support more complex FHE-based applications that take (potentially poisoned) input from the (malicious) circuit evaluator. Argos is compatible with commodity hardware and incurs minimal performance overhead: an average of 3% for FHE evaluation and 8% for complex protocols. Through these systems, we show that effective defenses can be built against microarchitectural side-channel and transient execution attacks. Our constructive approach yields practical systems that are secure by design while maintaining efficiency and usability. This thesis opens new possibilities for the deployment of trusted hardware by demonstrating concrete paths toward robust microarchitectural security.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164044</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Steering Robots with Inference-Time Interactions</title>
<link>https://hdl.handle.net/1721.1/164043</link>
<description>Steering Robots with Inference-Time Interactions
Wang, Yanwei
Imitation learning has driven the development of generalist policies capable of autonomously solving multiple tasks. However, when a pretrained policy makes errors during deployment, there are limited mechanisms for users to correct its behavior. While collecting additional data for finetuning can address such issues, doing so for each downstream use case is inefficient at deployment. My research proposes an alternative: keeping pretrained policies frozen as a fixed skill repertoire while allowing user interactions to guide behavior generation toward user preferences at inference time. By making pretrained policies steerable, users can help correct policy errors when the model struggles to generalize—without needing to finetune the policy. Specifically, I propose (1) inference-time steering, which leverages user interactions to switch between discrete skills, and (2) task and motion imitation, which enables user interactions to edit continuous motions while satisfying task constraints defined by discrete symbolic plans. These frameworks correct misaligned policy predictions without requiring additional training, maximizing the utility of pretrained models while achieving inference-time user objectives.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164043</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing Deep Learning Efficiency: From Specialized Co-Design to Automated Generation</title>
<link>https://hdl.handle.net/1721.1/164042</link>
<description>Advancing Deep Learning Efficiency: From Specialized Co-Design to Automated Generation
Lin, Yujun
The explosive growth of artificial intelligence (AI) technologies, particularly large-scale deep learning models such as large language models and diffusion models, has intensified the demand for efficient full-stack inference solutions that effectively balance performance and costs. This thesis presents a comprehensive exploration of algorithm-system co-optimization, hardware design specialization, and automation for scalable AI deployment. We begin with algorithmic optimization for large-scale models, including large language models and diffusion models, developing inference libraries that leverage quantization to boost the performance of generative AI models on existing GPU platforms. Next, we design specialized hardware accelerators for domain-specific applications, specifically point cloud understanding, emphasizing efficiency improvements through the exploitation of data sparsity. Finally, we open up the hardware design space beyond template-based sizing, and progress into the automated learning-based co-design of neural network and hardware architectures, maximizing their synergy with a full-stack joint optimization. We then introduce an automated framework for spatial accelerator generation, transforming high-level mappings into custom hardware designs that support scalable deployment. Together, these contributions advance AI inference efficiency by bridging the gap between advanced computational requirements and hardware capabilities, between theoretical potential and practical solutions, and between design cost and effectiveness.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164042</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Theoretic Foundations for Understanding Quantum Systems</title>
<link>https://hdl.handle.net/1721.1/164041</link>
<description>Learning Theoretic Foundations for Understanding Quantum Systems
Liu, Allen
Understanding and harnessing the power of quantum systems has the potential to transform many domains in science and technology. However, before we can achieve these aspirations, we must first build a better understanding of how quantum systems fundamentally behave. In this thesis, we approach this question through the lens of learning theory to develop new paradigms for learning about quantum systems and understanding their structural properties. We deliver several surprising results, upending previous beliefs about even fundamental laws and giving provably efficient algorithms for learning about quantum systems in settings previously conjectured to be intractable. Typically in quantum many-body systems, the particles in the system interact locally with respect to some geometry as described by a local Hamiltonian. Two key questions are first, understanding equilibrium properties of a system with a given Hamiltonian and second, recovering the Hamiltonian from measurements of the properties of the system. For the first, we prove a universal law that there is a sudden death of entanglement, at a critical temperature depending only on the geometry but not on the system size. For the second, we give the first efficient algorithm for recovering the Hamiltonian at any temperature, breaking a conjectured barrier at low temperatures. Beyond systems with local interactions, we also consider learning and testing properties of general quantum states, focusing on the interplay between statistical complexity and near-term quantum device constraints, only allowing for entangled measurements over a limited number of copies of the state. We characterize the optimal rates for learning and testing with single-copy measurements and for multi-copy measurements in many relevant near-term regimes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164041</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Domain Wall Based Magnonics in Iron Garnet</title>
<link>https://hdl.handle.net/1721.1/164040</link>
<description>Domain Wall Based Magnonics in Iron Garnet
Gross, Miela J.
Magnonic devices leverage magnons, quantized spin waves, as the mechanism to process and transfer information. In materials with low Gilbert damping, these spin wave-based systems enable ultra-fast operation while eliminating the thermal heating and leakage currents inherent to conventional electron-based microelectronics. To maximize energy efficiency and processing speed, materials like iron garnets, ferrimagnetic insulators with tunable magnetic properties, are essential. Key magnetic parameters, including saturation magnetization, perpendicular magnetic anisotropy, coercivity, and Gilbert damping, can be tailored through elemental substitution or strain engineering in thin films. Furthermore, relativistic domain wall velocities reported in yttrium iron garnet (YIG), bismuth-substituted YIG, and thulium iron garnet lay the foundations for high-speed operation. These unique attributes position garnets as ideal materials for the development of magnonic devices that integrate efficiency, speed, and versatility. This thesis presents my research on integrating thin film garnets into domain wall based magnonic devices. It begins by exploring the magnetic characterization of thin film iron garnets, including the growth process, temperature-dependent magnetic behavior, and tunable magnetic anisotropy. Next, we report on magnonics within the garnet, focusing on the interactions between spin waves and domain walls. Finally, we demonstrate a write mechanism for a magnonic device driven by spin wave-induced domain wall motion, providing detailed characterization of the device behavior and performance. These results underscore the potential of iron garnets for magnonic device applications and offer insights into the efficiency of the write mechanism, paving the way for energy-efficient, high-speed spintronic technologies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164040</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigating Inhomogeneity in High-Field MRI Excitations: Arbitrary Waveform Optimization and Multiphoton Parallel Transmission (MP-pTx)</title>
<link>https://hdl.handle.net/1721.1/164039</link>
<description>Mitigating Inhomogeneity in High-Field MRI Excitations: Arbitrary Waveform Optimization and Multiphoton Parallel Transmission (MP-pTx)
Drago, John M.
High-field magnetic resonance imaging (MRI) using a standard volume coil results in a spatially varying flip angle across the body, which renders images difficult to clinically interpret. This arises from the complex interactions of electromagnetic fields from current-carrying elements surrounding the imaging region. Parallel transmission (pTx) mitigates this issue by employing multiple high-power, independently controlled transmit elements for more precise excitation control. However, since the wavelength of the applied radio waves is shortened in tissue, the effect becomes highly dependent on the patient’s anatomy. As a result, optimization must be performed on a patient-by-patient basis, and methods that attempt full control of these independent waveforms are too computationally intensive to execute during the limited examination time. Additionally, the high-field excitations create complex electric field distributions that require control and careful monitoring to avoid excessive tissue power deposition (and ultimately heating), quantified as the specific absorption rate (SAR). To address these challenges, we introduce a method for optimizing patient-specific pulses using a global waveform (Ritz) approach, enabling rapid, in-scanner optimization. While pTx effectively addresses flip angle inhomogeneity, it remains costly and introduces challenges in SAR management. We address the SAR management and cost problems of pTx by introducing and characterizing the MP-pTx method, which leverages the multiphoton phenomenon to improve homogeneity using a standard volume coil supplemented with low-frequency (kilohertz) parallel channels. MP-pTx reduces costs and simplifies SAR management by shifting the parallel irradiation to low-cost, low-SAR shim array channels. These channels supplement an off-resonant excitation from a conventional birdcage coil with an oscillating, z-directed field that satisfies the resonance condition for spin state transitions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164039</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Latent Motion Planning and Reinforcement Learning for Legged Locomotion</title>
<link>https://hdl.handle.net/1721.1/164038</link>
<description>Generative Latent Motion Planning and Reinforcement Learning for Legged Locomotion
Miller, Adam Joseph
In recent years, reinforcement learning has demonstrated its promise as a powerful tool for developing innovative and advanced control systems for legged robots. The method’s robustness, versatility, and generality have made it a prime candidate for future robotic systems deployed in the real world. Through the development of more advanced machine learning algorithms and more reliable and efficient physics simulators, reinforcement learning continues to improve and enable new, dynamic, and agile capabilities. While the results are often impressive and the tools relatively beginner-friendly, there remain impediments to scalable and reliable progress. Poor reward function scaling, challenges balancing exploration versus exploitation, and misalignment from the engineer’s intent are roadblocks to better performance. To get beyond these limitations, new tools and frameworks are necessary. In this work, I present novel methods to address these challenges and extend the capabilities of reinforcement learning on robot hardware. Through the quantification of the distributional sim-to-real gap, simulation model optimization for hardware matching, latent space motion sequence planning, and latent style training, I demonstrate never-before-seen performance on legged hardware.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164038</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from Weak Supervision: Theory, Methods, and Applications</title>
<link>https://hdl.handle.net/1721.1/164037</link>
<description>Learning from Weak Supervision: Theory, Methods, and Applications
Lang, Hunter
The growing demand for high-quality labeled data to train machine learning models has driven widespread adoption of weak supervision and synthetic data methods, which use automated models instead of humans for annotation. Large language models (LLMs) have further accelerated this trend because their zero- and few-shot classification performance enables them to serve as effective “synthetic annotators” for various tasks. In practice, the data generated by these weak annotators is imperfect, but it enables the training of strong models. However, theoretical understanding of why training one model on the outputs of another leads to strong performance remains limited, especially when the annotator model exhibits suboptimal performance on the target task. In this thesis, I develop a theoretical framework for learning from weak supervision that captures the key aspects of the problem better than existing approaches in the crowdsourcing and learning-with-noisy-label literature. This framework establishes structural conditions that explain when and why weak supervision can reliably train strong models. Building on these theoretical results, the second part of the thesis introduces methods to improve how models learn from weak supervision and applies these methods to low-labeled-data settings.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164037</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-fidelity Optimal Trajectory Generation: Optimal Experiment Design for Robot Learning</title>
<link>https://hdl.handle.net/1721.1/164036</link>
<description>Multi-fidelity Optimal Trajectory Generation: Optimal Experiment Design for Robot Learning
Ryou, Gilhyun
Data-driven methods have significantly advanced robot learning, yet their direct application to real-world robots remains challenging, particularly under extreme conditions. This challenge is especially pronounced for highly maneuverable vehicles like quadrotor aircraft, which often operate in scenarios requiring rapid maneuvering, such as racing, defense systems, or safety-critical obstacle avoidance. In such extreme conditions, real-world constraints like control delays, state estimation errors, and battery voltage fluctuations often compromise trajectory reliability, even when trajectories conform to ideal dynamics. However, typical data-driven methods are developed in simulated environments. Consequently, the transition to real-world dynamics requires extensive fine-tuning, which can be risky, as perfect training in simulations does not guarantee safe transitions to real-world dynamics. This thesis employs methods from optimal experiment design to address these challenges. By quantifying uncertainty and maximizing information gain, the approach aims to safely and efficiently design the real-world experiments required for accurate constraint modeling. In the first chapter, we present a multi-fidelity Bayesian optimization method that searches for time-optimal speed profiles for quadrotor aircraft, effectively balancing numerical simulations with real-world flight experiments. The second chapter extends the optimal experiment design method to a high-dimensional online planning problem through integration with reinforcement learning. The proposed algorithms, trained and validated through real-world flight experiments, significantly outperform baseline methods in trajectory time and computational efficiency. Additionally, these algorithms have been adapted to various planning problems, including fixed-wing aircraft planning, cooperative multi-drone systems, and energy-efficient trajectory generation.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164036</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Systems for Large-Scale Graph Representation Learning</title>
<link>https://hdl.handle.net/1721.1/164035</link>
<description>Efficient Systems for Large-Scale Graph Representation Learning
Huang, Tianhao
Graph representation learning has gained significant traction in critical domains including finance, social networks, and transportation systems due to its successful application to graph-structured data. Graph neural networks (GNNs), which integrate the power of deep learning with graph structures, have emerged as the leading methods in this field, delivering superior performance across diverse graph-related tasks. However, training graph neural networks on large-scale datasets encounters scalability challenges on current system architectures. First, the sparse, non-localized structures of real-world graphs lead to inefficiencies in data sampling and movement. This characteristic heavily stresses system input/output (I/O), particularly burdening the peripheral buses during the sampling phase of GNN training. Second, the suboptimal mapping of the training procedure to GPU kernels leads to compute inefficiencies, including substantial kernel orchestration overhead and redundant computations. Addressing these challenges requires a comprehensive, full-stack optimization approach that fully leverages hardware capabilities. This thesis presents two complementary works to achieve the goal. The first work, Hanoi, unblocks the data loading bottleneck in out-of-core GNN training by co-designing the sampling algorithms to align with the hierarchical memory organization of commodity hardware. Hanoi drastically reduces I/O traffic to external storage, delivering up to 4.2× speedup over strong baselines with negligible impacts on the model quality. Notably, Hanoi is able to obtain competitive performance close to in-memory training with only a fraction of the memory requirements. Building on this foundation, the second work, Joestar, introduces a unified framework for optimized GNN training on GPUs. Joestar adapts the multistage sampling approach from Hanoi to in-memory training, which frees CPUs from heavy data loading workloads. 
Joestar also identifies novel kernel fusion opportunities and formulates better execution schedules by jointly considering the sampling and compute stages. Combined with compiler infrastructure in PyTorch, Joestar achieves state-of-the-art GNN training throughputs for billion-edge graph datasets on a single GPU.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164035</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generalizable Long-Horizon Robotic Manipulation under Uncertainty and Partial Observability</title>
<link>https://hdl.handle.net/1721.1/164034</link>
<description>Generalizable Long-Horizon Robotic Manipulation under Uncertainty and Partial Observability
Curtis, Aidan
A central goal in embodied artificial intelligence is to enable autonomous agents to accomplish complex, long-horizon tasks in novel, partially observable environments. In these scenarios, agents must effectively reason about uncertainty, generalize from limited experiences, and proactively plan actions to acquire missing information. This thesis tackles these core challenges by developing and evaluating novel methods specifically designed for partially observable contexts. The first part of this thesis introduces an enhanced heuristic-guided planning technique that increases search efficiency in sparse-reward domains with significant uncertainty. Next, we investigate how symbolic reasoning can be integrated into the decision-making framework, accelerating search through the use of temporal and belief-space abstractions. Then, we propose a method for sequencing low-level reinforcement learning skills alongside information-gathering actions, enabling increased task complexity and robustness in real-world tasks. Lastly, we show how large language models may be leveraged for few-shot model learning, allowing agents to rapidly adapt and generalize to new scenarios. The methods presented in this thesis advance the state-of-the-art in embodied AI by enabling robots to better handle uncertainty and incomplete information, ultimately paving the way for more capable, exploratory, and risk-aware autonomous systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164034</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graph-based Vector Search Algorithms for Retrieval-Augmented AI Systems</title>
<link>https://hdl.handle.net/1721.1/164033</link>
<description>Graph-based Vector Search Algorithms for Retrieval-Augmented AI Systems
Zhang, Ziyu
The recent advancement of large language models (LLMs) and large multimodal models (LMMs) greatly enhances the capabilities of AI systems such as recommendation systems and coding assistants, making them more practical for real-world deployment. However, these models cannot directly interact with large volumes of data in a knowledge corpus during inference/task time due to inherent architectural limits and cost concerns. Encoding data into vector embeddings and leveraging approximate nearest neighbor search (ANNS) have thus become an important data processing primitive in AI systems following the introduction of retrieval-augmented generation (RAG). However, the complexity of tasks these AI systems aim to solve introduces challenges for existing ANNS algorithms. I developed methods to extend existing ANNS algorithms to address two such challenges: freshness and heterogeneity in the data.&#13;
&#13;
Graph-based ANNS algorithms have been proven to offer a superb trade-off between cost and approximation quality while following the simple intuition of best-first search. I focus on adapting graph-based ANNS algorithms to two settings featuring emerging challenges. (1) Data is updated constantly. Existing algorithms are inefficient under deletions and not robust against different orderings in the workload. I propose methods addressing these problems and develop an algorithm, based on Vamana, a state-of-the-art graph-based ANNS algorithm, that supports updates effectively and efficiently. (2) Data is heterogeneous in format, modality, and how it relates to a query, making the similarity difficult to capture by the canonical ANNS definition. I explore ways to model the similarity between heterogeneous sources and use graph-based ANNS approaches to perform semantic search in this setting. I test this approach in an end-to-end multimodal question-answering system developed in-house.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164033</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Abstractions for Robust Hierarchical Manipulation Planning</title>
<link>https://hdl.handle.net/1721.1/164032</link>
<description>Adaptive Abstractions for Robust Hierarchical Manipulation Planning
Noseworthy, Michael S.
In this thesis, we address the problem of long-horizon robotic manipulation under partial observability. Tasks such as gearbox assembly or tidying a workstation involve many objects and necessitate a variety of manipulation capabilities. These long-horizon tasks are commonly addressed by hierarchical approaches, which introduce state and action abstractions to make planning tractable. However, our abstractions often rely on imperfect models of the world, which can lead to brittle execution. Furthermore, these abstractions depend on having accurate state information, which is often only noisily sensed, if sensed at all. For example, in the assembly domain, the pose of each part may only be known within a few millimeters, and a box’s mass distribution may be completely unsensed. To deploy robots outside of structured environments like the factory, they will need to be robust to model misspecification and partial observability. The central idea of this thesis is that we can develop adaptive abstractions to improve the robustness of hierarchical planning once the robot is deployed. Adaptive abstractions incorporate observations from the real world that are informative about misspecifications and partial observability, essentially allowing the planner to adapt to its deployment environment. We explore this idea by developing three types of models that enable this adaptivity at different levels of the abstraction hierarchy: plan feasibility models, adaptive samplers, and reactive control policies. In our first contribution, we consider adding adaptivity to a task and motion planning system at the task-planning level. We focus on a setup where the robot has access to a set of parameterized skills, but these skills are derived from imperfect models. To enable robust planning, we propose to autonomously learn skill feasibility models once the robot is deployed through a curious exploration phase. 
Critically, we propose a novel active learning framework to enable efficient learning without human intervention. We show that the resulting feasibility model leads to robust task performance on multiple downstream tasks in a stacking domain. Our second contribution looks at developing adaptive samplers that can incorporate information about object state that is typically unobserved (e.g., inertial and frictional properties). General-purpose belief representations can handle this partial observability, but online inference is computationally expensive. Instead, we propose to use an offline phase to learn an inference network that directly predicts a distribution over object properties that is consistent with the interaction history. We show that inference networks enable efficient adaptation in a grasping domain with heavy objects. Our final contribution focuses on learning adaptive controllers such that robustness is handled at the lowest level of the abstraction. We consider precise contact-rich manipulation tasks that are sensitive to pose estimation errors. To overcome noisy poses at the control level, explorative contact is necessary, but unintended forces can lead to catastrophic outcomes such as part slippage or damage. We propose to use simulation in an offline phase to train reactive force-aware policies. The policies are trained to overcome pose uncertainty while using force-sensing to adaptively limit excessive forces. The result is robust real-world performance on the multistage assembly of a planetary gearbox system, which includes insertion, gear-meshing, and nut-threading tasks. In summary, adaptive abstractions can be used to increase the robustness of hierarchical manipulation planning, an important step in deploying robots outside of the lab or factory. Throughout the thesis, we validate the proposed approaches on the real robot in stacking and assembly domains.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164032</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for Generalization Under Distribution Shift</title>
<link>https://hdl.handle.net/1721.1/164031</link>
<description>Methods for Generalization Under Distribution Shift
Netanyahu, Aviv
Machine learning systems have achieved remarkable performance in tasks where test data closely resembles the training distribution. However, real-world applications often require systems capable of handling more challenging situations -- specifically, adapting to new tasks and extrapolating to data points outside the distribution of the training set. The current paradigm for handling distribution shifts is collecting and training models on large datasets. This work offers two more principled frameworks that enable machine learning models to generalize effectively to out-of-distribution scenarios without sacrificing the power of modern overparameterized models.&#13;
&#13;
The first framework converts an out-of-support zero-shot generalization problem into an out-of-combination problem via a transductive reparameterization, which is possible under low-rank style conditions. We explore how this idea can be applied to domains like robotics, where the environment is changing, and materials and molecular design, where predicting properties of materials or molecules outside of known ranges is crucial to driving more efficient materials discovery.&#13;
&#13;
The second framework focuses on few-shot task learning, which involves agents learning new tasks from minimal data and applying them to new environments. We formulate the problem of few-shot task learning as Few-Shot Task Learning through Inverse Generative Modeling, which allows us to leverage the power of neural generative models pretrained on a set of base tasks. We adapt a method for efficient concept learning to few-shot task learning based on our formulation and rapidly learn new tasks with only a few examples, enabling task execution from autonomous driving to real-world robotic manipulation tasks in novel settings without the need for extensive retraining.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164031</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling 3D Scene Perception via Probabilistic Programming</title>
<link>https://hdl.handle.net/1721.1/164030</link>
<description>Scaling 3D Scene Perception via Probabilistic Programming
Gothoskar, Nishad
Understanding and interpreting the 3D structure of the world is a central challenge in artificial intelligence. Our physical world is 3D, yet our AI systems often “see” that world through pixels and images. In order to build truly intelligent AI systems, we must go beyond pixels and images and build 3D vision systems that can construct meaningful and useful 3D representations of the world. This is the problem of 3D scene perception. How do we transform raw visual input into 3D representations of the world? 3D scene perception has numerous applications from robotics to augmented reality. Despite the advances over the last decade, 3D perception remains a major bottleneck in real-world robotics applications. The challenge stems from the immense variability in real-world conditions, e.g. lighting, color, viewpoint, camera properties, object appearance, the incompleteness of visual data due to limited resolution, noise, and occlusions, and the approximations in our models of visual data. Developing more robust and generalizable 3D perception systems would be an important step towards more general-purpose robotics. In this thesis, we explore a probabilistic architecture for 3D perception based on structured generative models and probabilistic programs. We begin with 3DP3, the first iteration of our approach, which infers 3D scene graphs from real-world depth image data. 3DP3 demonstrates that our method works on real-world benchmarks and corrects commonsense errors from deep learning systems. Building on this foundation, we develop Bayes3D, which scales up these ideas using a GPU-accelerated image likelihood and generative model alongside a parallel coarse-to-fine inference algorithm. Next, we explore two approaches for incorporating RGB image data into generative 3D graphics programs, expanding their applicability. 
We then introduce DurableVS, which extends inverse-graphics techniques to model scenes involving a robot and multiple cameras, enabling precise control of a robot. Finally, we present Gen3D, which integrates all the key ideas from this thesis into a real-time 3D perception system that uses multi-resolution probabilistic models of 3D matter to enable real-time tracking that is competitive with vision transformers and 3D Gaussian splatting, state-of-the-art methods in computer vision and computer graphics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164030</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Generative Models for Visual Synthesis</title>
<link>https://hdl.handle.net/1721.1/164029</link>
<description>Efficient Generative Models for Visual Synthesis
Yin, Tianwei
While current visual generative models produce high-quality outputs, they suffer from significant computational costs and latency, limiting their applicability in interactive settings. In this dissertation, we introduce a suite of techniques designed to enhance the efficiency of generative models for image and video synthesis. First, we propose distribution matching distillation, a method that enables the training of one- or few-step visual generators by distilling knowledge from computationally expensive yet highly capable diffusion models. Next, we develop improved distillation techniques that enhance robustness and scalability, culminating in a production-grade few-step image generator. This system is now deployed in widely used software, generating hundreds of millions of images annually. Finally, we extend our approach to video generation by adopting an autoregressive paradigm, significantly reducing latency and enabling fast interactive video generation and world simulation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164029</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of pGaN-gate power HEMTs</title>
<link>https://hdl.handle.net/1721.1/164028</link>
<description>Characterization of pGaN-gate power HEMTs
Yu, Yue
This thesis presents a comprehensive study of p-GaN gate GaN High Electron Mobility Transistors (HEMTs) with a focus on understanding how fabrication process variations and gate structural designs impact key electrical performance metrics. Five industry-fabricated wafers, each processed with distinct etch depths, contact strategies, and p-GaN surface configurations, were characterized using a combination of DC and pulsed I–V measurements. Full-transistor modules were evaluated alongside specialized test structures to enable both system-level and localized analysis. DC measurements using the Keysight B1505A system revealed that more aggressive gate contact schemes improved ON-resistance and transconductance, but often at the cost of increased gate leakage and reduced threshold control. Pulsed I–V characterization with the Auriga AU4750 system uncovered dynamic Ron degradation behavior and charge trapping effects, especially under high drain bias conditions. Extracted time constants demonstrated process-dependent trends, with wafers retaining more of the p-GaN surface exhibiting slower charge detrapping and more severe transient effects. Specialized test structures provided additional insights into gate lateral conduction, sheet resistance, and contact asymmetry, reinforcing the connection between device layout, processing, and observed variability. These findings highlight critical trade-offs in the design and fabrication of p-GaN gate GaN HEMTs and offer design-aware strategies for optimizing performance and reliability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164028</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contactless Sleep and Physiological Monitoring Using Artificial Intelligence and Radio Waves</title>
<link>https://hdl.handle.net/1721.1/164027</link>
<description>Contactless Sleep and Physiological Monitoring Using Artificial Intelligence and Radio Waves
He, Hao
Remote monitoring of sleep and physiological signals is critical for tracking human health, managing diseases, and enabling early intervention. However, existing monitoring solutions face two major limitations: (1) they are often unsuitable for vulnerable populations—such as infants and seniors—and (2) most of them raise concerns about measurement accuracy. We propose a novel, contactless approach that addresses both challenges by combining advances in artificial intelligence (AI) and radio-frequency (RF) sensing. Our solution makes monitoring more comfortable, accessible, and affordable, while still delivering clinically meaningful insights. This thesis makes four fundamental contributions: First, we introduce a system that can extract high-fidelity breathing signals from ambient RF reflections, even in complex scenarios where multiple individuals are present, such as couples sharing a bed. Second, we develop an AI-based sleep monitoring framework that generates sleep hypnograms and detects respiratory events entirely without the need for on-body sensors. Third, we develop AI models that infer critical biomarkers—such as blood oxygen saturation (SpO₂) and inflammation (C-reactive protein levels)—in a fully passive and non-intrusive manner. Finally, inspired by the success of large language models, we show that physiological signals can be represented and interpreted analogously to language. This insight enables effective translation between modalities (e.g., from respiration to EEG) and unlocks robust representation learning for downstream clinical tasks. Together, these contributions establish a new paradigm for remote sleep and physiological monitoring—one that is contactless, continuous, and passive. We validate our system on real-world datasets and demonstrate its potential to fundamentally transform clinical care and home health monitoring.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164027</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Crystallization of Glauber's salt</title>
<link>https://hdl.handle.net/1721.1/164009</link>
<description>Crystallization of Glauber's salt
Coberly, C. Wheeler.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 39).
</description>
<pubDate>Wed, 01 Jan 1936 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164009</guid>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of torquemeters for high speed shafts</title>
<link>https://hdl.handle.net/1721.1/164008</link>
<description>Investigation of torquemeters for high speed shafts
Saluja, Narinder S. (Narinder Singh)
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1959; Includes bibliographical references (leaves 64-67).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164008</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The resonant-frequency shift of a microwave cavity caused by the high-density plasma in semiconductors, as a function of magnetic field</title>
<link>https://hdl.handle.net/1721.1/164007</link>
<description>The resonant-frequency shift of a microwave cavity caused by the high-density plasma in semiconductors, as a function of magnetic field
Weber, Robert.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Physics, 1959; Includes bibliographical references (leaves 46-47).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164007</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of angular scintillation of radar echoes</title>
<link>https://hdl.handle.net/1721.1/164006</link>
<description>Analysis of angular scintillation of radar echoes
Graham, James William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1952
</description>
<pubDate>Tue, 01 Jan 1952 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164006</guid>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid transit use of existing rail lines</title>
<link>https://hdl.handle.net/1721.1/164005</link>
<description>Rapid transit use of existing rail lines
Kenyon, Michael D.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1958; Includes bibliographical references (leaf 25).
</description>
<pubDate>Wed, 01 Jan 1958 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164005</guid>
<dc:date>1958-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mass transfer from rotating cylinders</title>
<link>https://hdl.handle.net/1721.1/164004</link>
<description>Mass transfer from rotating cylinders
Cotter, John.; Schmidt, Guy L.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1956; Bibliography: leaf 38.
</description>
<pubDate>Sun, 01 Jan 1956 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164004</guid>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An observation about the Chicago Council and its policies</title>
<link>https://hdl.handle.net/1721.1/164003</link>
<description>An observation about the Chicago Council and its policies
Naber, Fred P.
Thesis: B.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1948
</description>
<pubDate>Thu, 01 Jan 1948 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164003</guid>
<dc:date>1948-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The design and construction of an ultra-high vacuum field-ion microscope.</title>
<link>https://hdl.handle.net/1721.1/164002</link>
<description>The design and construction of an ultra-high vacuum field-ion microscope.
Olson, Gregory Bruce.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1970; Bibliography: leaf 35.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164002</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>South End Center for the Arts.</title>
<link>https://hdl.handle.net/1721.1/164001</link>
<description>South End Center for the Arts.
Dunbar, Gary Arthur.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1965; "Special requirements for group A occupancy: theatres" leaves [34-42] inserted. "Special requirements for group C occupancy: schools" leaves [50-54] inserted.; Bibliography: leaf 20.
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164001</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of braced excavations.</title>
<link>https://hdl.handle.net/1721.1/164000</link>
<description>Analysis of braced excavations.
Wong, Ing Hieng.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1971; Three leaves on transparent sheets. Vita.; Bibliography: leaves 95-99.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164000</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shipleasing as a prospective method of l/t financing for international shipowners.</title>
<link>https://hdl.handle.net/1721.1/163999</link>
<description>Shipleasing as a prospective method of l/t financing for international shipowners.
Angelicoussis, John Anthony.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1974; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163999</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modelling rail freight management.</title>
<link>https://hdl.handle.net/1721.1/163998</link>
<description>Modelling rail freight management.
Assad, A.
            (Arjang)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 1978; Vita.; Bibliography: leaves 277-292.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163998</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effects of rapid thermal annealing on gallium arsenide grown by MOCVD on silicon substrates</title>
<link>https://hdl.handle.net/1721.1/163997</link>
<description>The effects of rapid thermal annealing on gallium arsenide grown by MOCVD on silicon substrates
Lehman, LeNore Louise.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1988; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163997</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Acoustic-phonetic and lexical constraints in word recognition: lexical access using partial information</title>
<link>https://hdl.handle.net/1721.1/163996</link>
<description>Acoustic-phonetic and lexical constraints in word recognition: lexical access using partial information
Huttenlocher, Daniel P.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1984; Bibliography: leaves 73-77.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163996</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metabolism in vivo of 1, 3-butanediol in the rat</title>
<link>https://hdl.handle.net/1721.1/163995</link>
<description>Metabolism in vivo of 1, 3-butanediol in the rat
Nahapetian, Aratoonnaz,
            author.
The metabolism of 1,3-butanediol (BD) was investigated in vitamin B12-deficient and normal rats and in liver slice and diaphragm systems. Body weight gain and feed efficiency were determined in rats fed ad libitum for five weeks on a basal 5% BD or 5% sodium propionate diet with and without vitamin B12. The rats were train-fed for ten months on the same diets. The presence of sodium propionate in vitamin B12-deficient basal diets resulted in reduced food intake, while BD had the opposite effect. As a result, vitamin B12-deficient rats fed a 5% sodium propionate diet grew less than those fed a 5% BD diet. The metabolism in vivo of BD labeled in carbon-1 (BD-1-C14) and carbon-4 (BD-4-C14) was compared to the metabolism of propionate-1-C14 (PRP-1-C14) in vitamin B12-deficient and normal rats. Vitamin B12 deficiency reduced the oxidation of sodium propionate but not that of BD, and had no effect on glycogen labeling from BD-1-C14 and BD-4-C14. For PRP-1-C14, however, vitamin B12 deficiency resulted not only in no incorporation of label, but liver glycogen levels were also very small. On the other hand, when vitamin B12 was present in the diet, the labeling of glycogen from propionate was higher than that from either of the BD-labeled test compounds. Methylmalonic aciduria and urinary loss of ingested activity were higher in vitamin B12-deficient rats fed PRP-1-C14 than in those fed labeled BD. Nearly all of the urinary activity of vitamin B12-deficient rats fed PRP-1-C14 was in the form of methylmalonic acid (MMA), while little, if any, of the activity was found in the MMA fraction of urine of vitamin B12-deficient rats fed labeled BD. The metabolism in vivo of BD-C14 and BD-3-C14 was investigated in normal rats. About eighty percent of BD was oxidized to carbon dioxide within 32 hours. Its oxidation in the first eight hours was higher when BD was administered intraperitoneally than when it was fed by stomach tube. 
The loss of ingested activity in the urine, expressed as a percentage of total intake, and urinary 1,3-BD were higher at the higher doses of BD. However, the activity in urinary BD could not account for all the activity in the urine. A considerable amount of ketone bodies was detected in the urine of rats after feeding BD, while no detectable ketone bodies were found in the urine of control rats. In addition, the relative specific activities of urinary BD and β-hydroxybutyrate were 0.91 and 0.50, respectively. Polarimetry of both purified urinary BD and β-hydroxybutyrate showed that the percentages of (+)- and (-)-isomers of both compounds were 40 and 60%, respectively. The metabolism in vitro of BD-3-C14 and DL-β-hydroxybutyrate-4-C14 was investigated in systems which contained liver slices alone, diaphragm alone, or both liver slices plus diaphragm. The oxidation rate of β-hydroxybutyrate was lower in liver slices than in either the diaphragm or the liver slices plus diaphragm systems. Moreover, the rate of oxidation of β-hydroxybutyrate was highest in the system which included both liver slices and diaphragm. On the other hand, the oxidation rate of BD was lower in the system which had only diaphragm than in the other two systems. However, the rate of BD oxidation was highest in the system which included both liver slices and diaphragm. The presence of BD gave rise to increased D-(-)-β-hydroxybutyrate and acetoacetate in systems which contained liver slices or liver slices plus diaphragm. In addition, the production rate of D-(-)-β-hydroxybutyric acid was higher than that of acetoacetate in the presence of BD, while the opposite was true in its absence. Finally, all the radioactivity in the control incubation media was accounted for by BD-3-C14, while about 1.5 and 98.5 percent of incubation media activity were recovered in the β-hydroxybutyrate and BD peaks, respectively, in incubation systems containing liver. 
The results of this study indicate that 1,3-BD and sodium propionate do not share a common metabolic pathway in the rat. The data suggest, however, that 1,3-BD is most probably oxidized to β-hydroxybutyric acid using a "1,3-butanediol dehydrogenase" that is higher in activity in the liver than in the diaphragm. Moreover, the (+)-isomer of BD is oxidized at a faster rate than the (-)-isomer, suggesting that the two isomers are oxidized by two different pathways.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1971; Thesis supervised by Sanford A. Miller Vita: page 196; Includes bibliographical references (pages 182-196)
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163995</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of the merits of a four-element coplanar vacuum tube when used as a modulator at carrier telephone frequencies</title>
<link>https://hdl.handle.net/1721.1/163994</link>
<description>An investigation of the merits of a four-element coplanar vacuum tube when used as a modulator at carrier telephone frequencies
Perkins, Edwin H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1930; Includes bibliographical references (leaf 115).
</description>
<pubDate>Wed, 01 Jan 1930 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163994</guid>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Micro-analysis of grinding machine cuttings</title>
<link>https://hdl.handle.net/1721.1/163993</link>
<description>Micro-analysis of grinding machine cuttings
Zurlo, J. V.; Terkelsen, E. A.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1922
</description>
<pubDate>Sun, 01 Jan 1922 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163993</guid>
<dc:date>1922-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tests upon bamboo as a concrete reinforcement and a consideration of its application in construction</title>
<link>https://hdl.handle.net/1721.1/163992</link>
<description>Tests upon bamboo as a concrete reinforcement and a consideration of its application in construction
Young, Joe W.; Guo, Dianbang.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1924
</description>
<pubDate>Tue, 01 Jan 1924 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163992</guid>
<dc:date>1924-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A precision method for the determination of dew points of complex gaseous systems</title>
<link>https://hdl.handle.net/1721.1/163991</link>
<description>A precision method for the determination of dew points of complex gaseous systems
Cox, John Tatum.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 43).
</description>
<pubDate>Wed, 01 Jan 1936 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163991</guid>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>AbsInt-AI: Language Models for Abstract Interpretation</title>
<link>https://hdl.handle.net/1721.1/163731</link>
<description>AbsInt-AI: Language Models for Abstract Interpretation
Wang, Michael
Static program analysis is a foundational technique in software engineering for reasoning about program behavior. Traditional static analysis algorithms model programs as logical systems with well-defined semantics, enabling strong guarantees such as never missing a bug. However, traditional analyses almost always rely on uniform, hard-coded heap abstractions. While more adaptive abstractions are possible in theory, they are rarely implemented in practice due to their complexity and fragility. This limits their precision and flexibility, especially in dynamic languages like JavaScript, where heap structures are heterogeneous and difficult to analyze statically. In this work, we introduce AbsInt-AI, a language-model-guided static analysis framework based on abstract interpretation with adaptive, per-object heap abstractions for JavaScript. This enables the analysis to leverage high-level cues, such as naming conventions and access patterns, without requiring brittle, hand-engineered heuristics. Importantly, the LM agent operates within a bounded interface and never directly manipulates program state, preserving the soundness guarantees of abstract interpretation. AbsInt-AI reduces false positives by up to 34% for bug detection compared to traditional static analysis while maintaining soundness. Our ablations show that the LM’s interactions with the analysis environment are crucial, outperforming non-agentic direct LM predictions by 25%.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163731</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Video Streaming at Scale Across Devices, Networks, and Temporal Drift</title>
<link>https://hdl.handle.net/1721.1/163730</link>
<description>Optimizing Video Streaming at Scale Across Devices, Networks, and Temporal Drift
Sharma, Harsha
Video-streaming platforms tune dozens of playback parameters across thousands of client devices. Our measurements from Prime Video show that device-specific tuning can enhance stream quality. Yet traditional blackbox optimization methods like Bayesian optimization become prohibitively expensive due to the large configuration space and the constant emergence of new device types. We introduce AZEEM, a scalable recommendation system leveraging few-shot prediction to rapidly identify promising configurations for new devices. The key insight behind AZEEM is that devices exhibit performance similarities that enable predictions from limited observations. Trained on offline data of device-playback configuration interactions, AZEEM efficiently narrows down the search space to a small set of configurations likely to contain optimal or near-optimal candidates. Additionally, AZEEM addresses temporal distribution shift—where the best-performing configurations change over time—by recommending a small, robust set of candidates rather than a single configuration. Evaluations using large-scale real-world datasets show that AZEEM reduces exploration cost by 5.8–13.6× and improves stream quality compared to state-of-the-art Bayesian optimization and multi-armed bandit approaches, enabling effective device-specific optimization at scale. The material in this thesis is primarily sourced from the paper "Predict, Prune, Play: Efficient Video Playback Optimization Under Device Diversity and Drift" authored by Harsha Sharma, Pouya Hamadanian, Arash Nasr-Esfahany, Zahaib Akhtar, Mohammad Alizadeh, which is currently under submission.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163730</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Oreo: Protecting ASLR Against Microarchitectural Attacks</title>
<link>https://hdl.handle.net/1721.1/163729</link>
<description>Oreo: Protecting ASLR Against Microarchitectural Attacks
Song, Shixin
Address Space Layout Randomization (ASLR) is one of the most prominently deployed mitigations against memory corruption attacks. ASLR randomly shuffles program virtual addresses to prevent attackers from knowing the location of program contents in memory. Microarchitectural side channels have been shown to defeat ASLR through various hardware mechanisms. We systematically analyze existing microarchitectural attacks and identify multiple leakage paths. Given the vast attack surface exposed by ASLR, it is challenging to effectively prevent leaking the ASLR secret against microarchitectural attacks. Motivated by this, we present Oreo, a software-hardware co-design mitigation that strengthens ASLR against these attacks. Oreo uses a new memory mapping interface to remove secret randomized bits in virtual addresses before translating them to their corresponding physical addresses. This extra step hides randomized virtual addresses from microarchitecture structures, preventing side channels from leaking ASLR secrets. Oreo is transparent to user programs and incurs low overhead. We prototyped and evaluated our design on Linux using the hardware simulator gem5.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163729</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Counting Substructures with Graph Neural Networks</title>
<link>https://hdl.handle.net/1721.1/163728</link>
<description>On Counting Substructures with Graph Neural Networks
Tahmasebi, Behrooz
To achieve a graph representation, most Graph Neural Networks (GNNs) follow two steps: first, each graph is decomposed into a number of subgraphs (which we call the recursion step), and then the collection of subgraphs is encoded by several iterative pooling steps. While recently proposed higher-order networks show a remarkable increase in the expressive power through a single recursion on larger neighborhoods followed by iterative pooling, the power of deeper recursion in GNNs without any iterative pooling is still not fully understood. To make it concrete, we consider a pure recursion-based GNN which we call Recursive Neighborhood Pooling GNN (RNP-GNN). The expressive power of an RNP-GNN and its computational cost quantify the power of (pure) recursion for a graph representation network. We quantify the power by means of counting substructures, which is one main limitation of Message Passing Neural Networks (MPNNs), and show how RNP-GNN can exploit the sparsity of the underlying graph to achieve low-cost powerful representations. We also compare the recent lower bounds on the time complexity and show how recursion-based networks are near optimal.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163728</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Control of a Multi-Fingered Soft-Rigid Hybrid Robotic Hand</title>
<link>https://hdl.handle.net/1721.1/163727</link>
<description>Design and Control of a Multi-Fingered Soft-Rigid Hybrid Robotic Hand
Norton, Wil J.
In robot hands, compliance improves the quality of grasps and allows for robustness in contact with the environment, which is why soft robot hands, which are inherently compliant, generate such interest despite being complex to control and model. In prior work, our lab developed a soft-rigid hybrid architecture for a robot finger, with the intention of making a compliant finger that is as easy to control as a rigid robot. This thesis details the work done to take this architecture and develop it into a five-fingered dexterous gripper capable of highly compliant grasping — over several iterations, we create an integrated tendon-driven hand that is robust, maintainable, and inexpensive. We develop a precise controller for the soft-rigid hybrid finger, and extend it for both position and task space control of the hand — additionally we implement variable stiffness control within the controller without the need for additional hardware, via adjusting gain values in the control loop. We test the ability of the hand to complete the full set of human grasping postures, and demonstrate that the soft-rigid architecture enables a high degree of generalization, able to complete 28 of the 33 identified human grasp postures. Additionally, tests illustrate the hand’s advantages in completing traditionally difficult manipulation tasks such as picking up thin deformable objects (such as a dollar bill or folding cloth) as well as in interfacing with soft or delicate target objects. We adapt a teleoperation system to map the movements of the robot gripper to a glove worn by a human operator, and evaluate the usability of the hand as a teleoperation target for completing several tasks — we illustrate promising results that the compliance of the hand compensates for operator error and allows for fast completion of tasks requiring environmental or object contact, traditionally difficult tasks for existing rigid robots. 
Finally, we discuss the use of the teleoperation system to record demonstrations which we then use to train an imitation learning model, utilizing an implementation of denoising diffusion probabilistic models, to complete grasping tasks. We show that our soft-rigid fingers allow a dexterous hand to be trained to perform autonomous grasping with a relatively small set of expert demonstrations, and that the compliance of the physical structure allows for variance in the environment and object position to be compensated for by the physical properties of the hand.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163727</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Multi-Modality Imaging Cart for Barrett’s Esophagus</title>
<link>https://hdl.handle.net/1721.1/163726</link>
<description>Development of Multi-Modality Imaging Cart for Barrett’s Esophagus
Qu, Ashley
Barrett’s Esophagus (BE) is a key precursor to esophageal adenocarcinoma (EAC), but current screening and risk assessment methods are ineffective and costly. Many BE cases remain undiagnosed due to asymptomatic patients, and existing risk algorithms rely on patient data rather than biomarkers. This work aims to start building a risk progression model by using a multi-modal imaging system combining autofluorescence spectroscopy, optical coherence tomography, and diffuse reflectance spectroscopy to perform label-free optical biopsies on ex-vivo tissue. These images will be co-registered and validated with histological biomarkers for BE. The ultimate goal is to develop a non-invasive endoscopic capsule and algorithm to better assess BE progression and enhance early detection of EAC.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163726</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complexity of Basis-Restricted Local Hamiltonians</title>
<link>https://hdl.handle.net/1721.1/163725</link>
<description>Complexity of Basis-Restricted Local Hamiltonians
Ma, Henry
A major goal of quantum complexity theory is to understand which computational problems can be solved with access to certain quantum resources. The subfield of Hamiltonian complexity specifically considers computational problems that ask about properties of local Hamiltonians, which are of critical importance in quantum complexity because they can be viewed as quantum generalizations of classical constraint satisfaction problems. In this work, we study the complexity of certain restricted variants of the Quantum-k-Sat problem, a quantum analog of the NP-complete k-Sat problem. We introduce new variants of Quantum-k-Sat which place a basis restriction on the input Hamiltonian H = Σᵢ hᵢ . Each variant is defined by a fixed collection of bases B₁, . . . , Bᵣ of n-qubit space. We require that each Hamiltonian term hᵢ must be diagonal in one of these bases. Our results resolve the complexity of certain basis-restricted variants of Quantum-k-Sat. First, we show that the Quantum-6-Sat problem with Hamiltonian terms restricted to be diagonal in an X/Z mixed basis is QMA₁-complete. Second, we combine basis restriction with the restriction of commutativity, and show the following easiness result, which applies generally to higher-level quantum systems (qudits) and bases Q and R (which are real-valued and satisfy an overlap condition): The commuting Quantum-Sat problem on qudits, where Hamiltonian terms are either diagonal in the Q basis, the R basis, or a single mixed Q/R basis, is in NP.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163725</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Future of Personalized, Aligned Language Models</title>
<link>https://hdl.handle.net/1721.1/163724</link>
<description>Future of Personalized, Aligned Language Models
Han, Seungwook
Aligning Large Language Models (LLMs) to cater to different human preferences, learn new skills, and unlearn harmful behavior is an important problem. Search-based methods, such as Best-of-N or Monte-Carlo Tree Search, are effective, but impractical for LLM adaptation due to their high inference cost. On the other hand, using Reinforcement Learning (RL) for adaptation is computationally efficient, but performs worse due to the optimization challenges in co-training the value function and the policy. We present a new framework for reward optimization, Value Augmented Sampling (VAS), that can maximize different reward functions using data sampled from only the initial, frozen LLM. VAS solves for the optimal reward-maximizing policy without co-training the policy and the value function, making the optimization stable, outperforming established baselines, such as PPO and DPO, on standard benchmarks, and achieving comparable results to Best-of-128 with lower inference cost. Unlike existing RL methods that require changing the weights of the LLM, VAS does not require access to the weights of the pre-trained LLM. Thus, it can even adapt LLMs (e.g., ChatGPT), which are available only as APIs. In addition, our algorithm unlocks the new capability of composing several rewards and controlling the extent of each one during deployment time. By bringing together stability, flexibility, and efficiency, we explore the future of aligned, personalized language models that can be adapted seamlessly to meet a wide spectrum of human preferences.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163724</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Post-Carbon Seoul: Low-Carbon Interventions for a High-Carbon Housing Stock</title>
<link>https://hdl.handle.net/1721.1/163723</link>
<description>Post-Carbon Seoul: Low-Carbon Interventions for a High-Carbon Housing Stock
Ji, Yewon
Seoul, South Korea, exhibits an exceptionally rapid residential demolition-reconstruction cycle of approximately 30–40 years, resulting in one of the world’s shortest apartment building lifespans. This entrenched status quo, fueled by post-war policies, real estate speculation, and finance models treating housing primarily as a short-term asset, contrasts sharply with other developed nations. This research critiques South Korea’s model of rapid demolition for its significant, often overlooked, environmental impacts and social costs. To evaluate alternatives, the methodology comprises three key stages: A) a comparative analysis of the financial frameworks and sustainability outcomes characterizing Western residential longevity versus the unique Korean housing model; B) the formulation of a novel alternative practice focused on adaptive reuse and retrofitting, specifically tailored to integrate within South Korea’s economic system and cultural context; and C) the practical demonstration and assessment of this practice through a design case study, incorporating strategies like phased interventions and low-carbon materials such as mass timber. The analysis reveals that this alternative extends building lifespan and achieves substantial carbon reductions by preserving the embodied carbon within existing structures. It offers long-term financial benefits, presenting a viable economic pathway aligning key stakeholder interests through enduring value over speculative gains.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163723</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling Automatic Question Generation to Large Documents: A Concept-Driven Approach</title>
<link>https://hdl.handle.net/1721.1/163722</link>
<description>Scaling Automatic Question Generation to Large Documents: A Concept-Driven Approach
Noorbakhsh, Kimia
Assessing and enhancing human learning through question-answering is vital, especially when dealing with large documents, yet automating this process remains challenging. While large language models (LLMs) excel at summarization and answering queries, their ability to generate meaningful questions from lengthy texts remains underexplored. We propose Savaal, a scalable question-generation system with three objectives: (i) scalability, enabling question-generation from hundreds of pages of text; (ii) depth of understanding, producing questions beyond factual recall to test conceptual reasoning; and (iii) domain-independence, automatically generating questions across diverse knowledge areas. Instead of providing an LLM with large documents as context, Savaal improves results with a three-stage processing pipeline. Our evaluation with 76 human experts on 71 papers and PhD dissertations shows that Savaal generates questions that better test depth of understanding by 6.5× for dissertations and 1.5× for papers compared to a direct-prompting LLM baseline. Notably, as document length increases, Savaal’s advantages in higher question quality and lower cost become more pronounced.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163722</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>New approaches to diagnostic imaging: Magnetic particle imaging for human functional neuroimaging and short mid-field MRI magnet design</title>
<link>https://hdl.handle.net/1721.1/163721</link>
<description>New approaches to diagnostic imaging: Magnetic particle imaging for human functional neuroimaging and short mid-field MRI magnet design
Barksdale, Alex Christopher
Part I: Magnetic Particle Imaging for Human Functional Neuroimaging While Magnetic Resonance Imaging (MRI) has revolutionized diagnostic imaging since its clinical introduction in the 1980s — primarily focusing on hydrogen nuclei — it remains fundamentally limited by the weak nature of nuclear spin magnetism. For example, functional MRI (fMRI) provides valuable insights into brain activity through BOLD signaling, but its limited sensitivity and reliance on indirect physiological measures often necessitate large subject pools for meaningful analysis. In contrast, Magnetic Particle Imaging (MPI) utilizes the much stronger magnetism associated with superparamagnetic iron oxide nanoparticles (SPIONs), and by minimizing background signal levels which are not modulated by functional activity, it offers a promising alternative. However, there are no approved SPION tracers for human use that are well-suited to MPI, and we have little experience scaling this technology up to human-sized imagers. This thesis therefore demonstrates a human-scale MPI scanner using functional MPI (fMPI) in non-human primates and assesses its potential for future human studies. Additionally, we investigate safety aspects of MPI, specifically focusing on peripheral nerve stimulation (PNS) induced by the 25 kHz magnetic excitation fields used in MPI. Because this is a higher frequency than those used by MRI gradients, threshold data at this frequency are lacking. This thesis measures the PNS stimulation threshold in human subjects to better understand high-frequency magnetic PNS and ensure the safe implementation of human-scale MPI for future neuroimaging applications. Part II: Short Mid-Field MRI Magnet Designs Anxiety induced by the long, narrow tube of conventional 1.5T and 3T scanners is a common cause of incomplete patient examinations, leading to delays in diagnosis and reduced facility throughput. 
In contrast, the short aspect ratio of CT scanner bores is known to alleviate this anxiety, eliminating this problem. This thesis also addresses the need for a more patient-friendly MRI scanning option by introducing a new “hybrid” superconducting and permanent magnet concept applicable to mid-field (0.5T) superconducting solenoid magnets. While mid-field scanners offer lower sensitivity than high-field alternatives, recent advances in image reconstruction and denoising have significantly enhanced their utility, allowing them to deliver diagnostic information comparable to that of the previous generation of 1.5T scanners. Additionally, they increase the range of compatible metallic implants and offer hospitals a lower-cost, easier-to-site alternative to 1.5T and 3T scanners. They can also enhance patient comfort through shorter bore lengths and larger diameters, but their optimized winding designs still reach a limit in how short they can be made for a given homogeneity and diameter specification. This thesis introduces the use of rare-earth permanent magnets to enable further reductions in scanner length, aiming to match the aspect ratio of CT scanners.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163721</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ab initio modeling of superconducting nanowire single-photon detectors</title>
<link>https://hdl.handle.net/1721.1/163720</link>
<description>Ab initio modeling of superconducting nanowire single-photon detectors
Simon, Alejandro
Single-photon detectors are widely used in modern communication, sensing, and computing technology. Among these detectors, superconducting nanowire single-photon detectors (SNSPDs) possess the highest detection efficiencies, the shortest timing jitter, and the lowest dark count rates. However, for several applications, including those in the biological, astronomical, and quantum computation fields, there remains a desire to push the capabilities of modern detectors even further. To realize these improvements, it is necessary to develop an understanding of the physical mechanisms underpinning single-photon detection in these devices. However, current models are phenomenological, requiring experimental data for input, or can only recover qualitative agreement, severely limiting their predictive ability. In this thesis, we begin by describing the existing theoretical frameworks used to model superconducting materials and devices, both in equilibrium and nonequilibrium. We then illustrate an example of a phenomenological approach to modeling superconducting devices by developing an electrothermal model for the superconducting nanowire cryotron and demonstrating its efficacy in predicting the DC behavior and power dissipation of the device. Finally, we expand upon the current state-of-the-art SNSPD theory by utilizing recent advances in density functional theory to develop an ab initio model for the photon detection mechanism of SNSPDs. We then validate the predictions of our model with experimental data from the literature. The resulting model requires no experimental input, provides quantitative predictions of SNSPD performance, and can be extended to describe other superconducting devices, thus enabling the possibility of conducting a systematic search of materials for enhanced device performance.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163720</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference-Time Learning Algorithms of Language Models</title>
<link>https://hdl.handle.net/1721.1/163719</link>
<description>Inference-Time Learning Algorithms of Language Models
Akyurek, Ekin
Modern language models (LMs) can perform complex tasks through in-context learning (ICL)—they can adapt to a task via examples provided in their input without any parameter updates. However, fundamental questions remain about when this adaptation works, what algorithms underlie it, and how to improve it. This thesis studies the mechanisms and limitations of ICL and develops better methods for test-time adaptation of LMs on diverse benchmarks of language modeling and reasoning. I begin by evaluating the ICL capabilities of pre-trained LMs. I demonstrate that LMs can achieve strong compositional generalization when provided with few-shot examples. In a separate analysis, I show that their performance deteriorates significantly when faced with counterfactual variants of tasks they normally perform well on. Later, I develop "model problems" of ICL to test the ability of LMs to learn novel mathematical structures in-context, such as linear functions and probabilistic formal languages. I interpret the algorithmic foundations of ICL. First, I prove that Transformer models with sufficient capacity can execute both iterative and closed-form solutions to linear regression problems, and demonstrate that these theoretical solutions manifest as interpretable intermediate variables. Then, I reveal how LMs develop specialized circuits that implement approximate n-gram learning algorithms for probabilistic languages. Building on these insights, I develop two approaches to enhance LMs. First, I demonstrate that explicitly incorporating n-gram computation into model architectures improves performance across multiple domains. Second, I introduce a test-time training method that enables rapid adaptation through gradient updates on input data, achieving significant improvements over standard few-shot learning on abstract reasoning tasks.
Together, these results advance our understanding of how LMs adapt to novel tasks and provide practical techniques for enhancing their test-time learning capabilities.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163719</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Trapping and Laser Cooling an Ensemble of Ytterbium-171 Atoms for use in an Atomic Clock</title>
<link>https://hdl.handle.net/1721.1/163718</link>
<description>Trapping and Laser Cooling an Ensemble of Ytterbium-171 Atoms for use in an Atomic Clock
Velez, Gustavo A.
Optical lattice clocks require careful preparation of atomic ensembles in order to ensure homogeneous interactions with the clock laser. We demonstrate loading and laser cooling of an ensemble of ytterbium-171 atoms in a 2D optical dipole trap created by an optical cavity. Our loading method ensures that all atoms are located in the intersection of two perpendicular dipole traps, as verified through absorption imaging. Raman sideband cooling was used to cool the atomic ensemble from 15.7 μK to 6.3 μK, as measured through optical sideband spectroscopy on the 578 nm clock transition. Together, these steps improved the transfer of atoms during a Rabi oscillation from the ground to the clock state from approximately 45 percent to 80 percent excitation fraction. The final atomic ensemble preparation is now sufficient for running an atomic clock.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163718</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving and Analyzing Model Merging Methods for Adaptation</title>
<link>https://hdl.handle.net/1721.1/163717</link>
<description>Improving and Analyzing Model Merging Methods for Adaptation
Pari, Jyothish
In this work, we explore the limitations of combining models by averaging intermediate features, referred to as model merging, and propose a new direction for achieving collective model intelligence through what we call compatible specialization. Current methods for model merging, such as parameter and feature averaging, struggle to effectively combine specialized models due to representational divergence during fine-tuning. As models specialize to their individual domains, their internal feature representations become increasingly incompatible, leading to poor performance when attempting to merge them for new tasks. We analyze this phenomenon using centered kernel alignment (CKA) and show that as models specialize, the similarity in their feature space structure diminishes, hindering their capacity for collective use. To address these challenges, we investigate routing-based merging strategies, which offer more flexible methods for combining specialized models by dynamically routing across different layers. This allows us to improve on existing methods by combining features from multiple layers rather than relying on fixed, layer-wise combinations. However, we find that these approaches still face limitations when layers within models are representationally incompatible. Our findings highlight the importance of designing new approaches for model merging that operate on well-defined input and output spaces, similar to how humans communicate through language rather than intermediate neural activations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163717</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Differences in GPT-4 Treatment by Gender in Healthcare Applications</title>
<link>https://hdl.handle.net/1721.1/163716</link>
<description>Evaluating Differences in GPT-4 Treatment by Gender in Healthcare Applications
Pan, Eileen
LLMs already permeate medical settings, supporting patient messaging, medical scribing, and chatbots. While prior work has examined bias in medical LLMs, few studies focus on realistic use cases or analyze the source of the bias. To assess whether medical LLMs exhibit differential performance by gender, we audit their responses and investigate whether the disparities stem from implicit or explicit gender cues. We conduct a large-scale human evaluation of GPT-4 responses to medical questions, including counterfactual gender pairs for each question. Our findings reveal differential treatment based on the original patient gender. Specifically, responses for women more often recommend supportive resources, while those for men advise emergency care. Additionally, LLMs tend to downplay medical urgency for female patients and escalate it for male patients. Given rising interest in “LLM-as-a-judge” approaches, we also evaluate whether LLMs can serve as a proxy for human annotators in identifying disparities. We find that LLM-generated annotations diverge from human assessments in heterogeneous ways, particularly regarding error detection and relative urgency.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163716</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Highly Integrated Graphene-Based Chemical Sensing Platform for Structural Monitoring Applications</title>
<link>https://hdl.handle.net/1721.1/163715</link>
<description>Highly Integrated Graphene-Based Chemical Sensing Platform for Structural Monitoring Applications
López Ángeles, Christian Emmanuel
Two-dimensional materials, such as graphene, hold promise for sensing applications. Graphene's remarkable surface-to-volume ratio, when employed as a transducer, enables the sensor channel to be readily modulated in response to chemical changes in proximity to its surface, effectively converting chemical signals into the electrical domain. However, their utilization has been constrained due to variations in device-to-device performance arising from synthesis and fabrication processes. To address this challenge, we employ Graphene Field Effect Transistors (GFETs) in developing a robust and multiplexed chemical sensing platform. This platform comprises a silicon chip with multiple arrays of sensing units distributed on its surface. This chip is coupled with custom-designed high-speed readout electronics for structural monitoring applications. For example, in harsh environmental conditions, structures constructed from reinforced concrete may experience degradation due to corrosion, a chemical process initiated by carbonation from atmospheric CO₂ and significant fluctuations in temperature and humidity. Under normal conditions, concrete maintains a pH level within the alkaline range of 13 to 14. However, when subjected to carbonation, its pH decreases to values between 8 and 9. Our platform excels in real-time pH monitoring. By conducting I-V sweep measurements in the sensor channel, we have established a correlation between [H⁺] concentration and the device transfer characteristics, i.e., the gate-source voltage (V_GS) at graphene's Dirac point, with an accuracy of roughly 97%. Additionally, we evaluate changes in graphene channel resistance induced by pH variations. This system and correlation allow for the prompt detection of any deviations induced by corrosion within a concrete environment.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163715</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards More Interpretable AI With Sparse Autoencoders</title>
<link>https://hdl.handle.net/1721.1/163714</link>
<description>Towards More Interpretable AI With Sparse Autoencoders
Engels, Joshua
While large language models demonstrate remarkable capabilities across diverse domains, the specific representations and algorithms they learn remain largely unknown. The quest to understand these mechanisms holds dual significance: scientifically, it represents a fundamental inquiry into the principles underlying intelligence, while practically, and with growing urgency, it is vital for mitigating risks from these very same increasingly powerful systems. The initial section of this thesis tackles this challenge of interpreting internal language model representations (features) by employing sparse autoencoders (SAEs). An SAE decomposes neural network hidden states into a potentially more interpretable basis. In Chapter 2, we introduce an unsupervised, SAE-based methodology that successfully identifies inherently multi-dimensional features. Notably, we establish that language models causally represent concepts such as days of the week and months of the year using circular structures. This work provided the first definitive evidence of causal, multi-dimensional features, thereby refuting the one-dimensional linear representation hypothesis. Chapter 3 further assesses whether SAEs identify “true” atomic language model features. We compare the generalization performance and data efficiency of linear probes trained on SAE latents against those trained on the original hidden state basis. The negative outcomes of these experiments suggest limitations in SAEs for capturing the true ontology of language models. Motivated by the aforementioned limitations, the second part of this thesis investigates sparse autoencoders themselves, exploring potential improvements and characterizing their failure modes.
Chapter 4 examines the portion of activations not reconstructed by SAEs, which we term “Dark Matter.” We find that a significant fraction of this dark matter is linearly predictable, and furthermore, that specific tokens poorly reconstructed by SAEs remain largely consistent across SAE sizes and sparsities. This suggests that SAEs may systematically fail to capture certain input subspaces, which we hypothesize to contain inherently dense features. Subsequently, Chapter 5 investigates a method to enhance SAE utility: freezing the learned SAE parameters and finetuning the surrounding language model components to minimize KL divergence with the original model’s output distribution. This technique results in a 30% to 55% decrease in the cross-entropy loss gap incurred by inserting the SAE into the model.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163714</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transmission Line Dynamics Modeling For Power Electronics-Enabled Control in the Electric Power Systems</title>
<link>https://hdl.handle.net/1721.1/163713</link>
<description>Transmission Line Dynamics Modeling For Power Electronics-Enabled Control in the Electric Power Systems
Lawson, Riley E.
In the analysis and operation of electric power systems, understanding the rates at which dynamic phenomena evolve is critical. Classically, power systems operate on multiple time scales, with slower mechanical dynamics from synchronous machines, faster electromechanical controls and protection, and very fast electrical dynamics from transmission networks. This time scale separation results in system modeling techniques which neglect certain component dynamics. However, in systems with significant penetration of power electronic devices and under fast time scale phenomena, the rates at which dynamics evolve become less separated, necessitating the modeling of all system dynamics. In large-scale systems, this becomes computationally challenging due to the high dimensionality of the interconnected system model. This work investigates the role transmission line dynamics play at very fast time scales in power systems. Theoretical results are presented to analyze which transmission line dynamics contribute significantly to power system dynamics, allowing for the intelligent incorporation of transmission line dynamics into computationally tractable models. For the first time, the use of control co-design techniques is demonstrated algorithmically to design fast power electronics-enabled control to stabilize unstable dynamics in electric power systems. This technique allows the design of controls, in an iterative way, to create stable interconnected systems. Finally, the impact of transmission line modeling on the design of protection at fast time scales is analyzed. This work presents techniques to protect against short circuits in response to load disconnections, and introduces DC circuit breaker configurations to cause current commutation.
Power system operators today possess the technology to implement fast control of dynamics; however, due to insufficient information on how to model and prepare for these dynamics, they instead rely on conventional, overly conservative control schemes. This work aims to bridge this gap by presenting methodologies to incorporate these dynamics into next-generation system models and by showing how to design control and protection to mitigate the risks these fast dynamics pose.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163713</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundation Models for Protein Phenotype Prediction</title>
<link>https://hdl.handle.net/1721.1/163712</link>
<description>Foundation Models for Protein Phenotype Prediction
Calef, Robert
Understanding the roles of human proteins remains a major challenge, with approximately 20% of human proteins lacking known functions and more than 40% missing context-specific functional insights. Even well-annotated proteins are often poorly characterized in diverse biological contexts, disease states, and perturbations. We present ProCyon, a foundation model for modeling, generating, and predicting protein phenotypes across five interrelated knowledge domains: molecular functions, therapeutic mechanisms, disease associations, functional protein domains, and molecular interactions. To support this, we created ProCyon-Instruct, a dataset of 33 million protein phenotype instructions, representing a comprehensive resource for multiscale protein phenotypes. By co-training a large language model with multimodal molecular encoders, ProCyon integrates phenotypic and protein data. A novel architecture and instruction tuning strategy allow ProCyon to process arbitrarily interleaved protein-and-phenotype inputs, achieve zero-shot task transfer, and generate free-form text phenotypes interleaved with retrieved protein sequence, structure, and drug modalities in a single unified model.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163712</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functionalization of CNFET arrays for chemical sensing</title>
<link>https://hdl.handle.net/1721.1/163711</link>
<description>Functionalization of CNFET arrays for chemical sensing
Song, Jaekang
Practical deployment of gas sensors for general-purpose applications requires integrated chips that operate at room temperature. However, real-world implementation has been limited by challenges such as the integration of highly sensitive and selective sensors, as well as insufficient statistical validation. In this work, we present an integrated gas sensor array comprising 2048 carbon nanotube field-effect transistors (CNFETs), functionalized with conductive metal-organic frameworks (cMOFs) and metal nanoparticles. Our functionalization approach enhances sensor responses by up to two orders of magnitude and enables on-chip pattern generation. Furthermore, the large number of redundant sensors allows for statistically significant measurements. The improved sensitivity is attributed to increased Schottky barrier modulation. We also demonstrate the chip’s capability to classify bacteria and yeast based on the gas mixtures emitted from cultures grown on agar plates. This work highlights the potential of integrated gas sensors as a practical, rapid, and cost-effective approach for general gas sensing applications, including biomedical applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163711</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Methods for Single Cell RNA-Sequencing Data to Improve Clinical Oncology</title>
<link>https://hdl.handle.net/1721.1/163710</link>
<description>Machine Learning Methods for Single Cell RNA-Sequencing Data to Improve Clinical Oncology
Boiarsky, Rebecca
Single-cell RNA sequencing (scRNA-seq) offers a detailed view of the cellular and phenotypic composition of healthy and diseased tissues. While machine learning (ML) methods are well-suited for the high-dimensional nature of scRNA-seq data, current computational tools face limitations, particularly when confronted with data from clinical oncology. This thesis presents the development and application of ML techniques for scRNA-seq data to address key computational challenges, with a focus on challenges in clinical oncology. It covers four key areas: identifying gene signatures and biomarkers in multiple myeloma, developing methods to account for somatic copy number variations in tumor samples, benchmarking large, pre-trained scRNA-seq foundation models, and creating a framework for predicting clinical outcomes using patient-level representations of single-cell data. Together, these studies aim to develop and evaluate novel ML algorithms for scRNA-seq data which can unlock actionable insights for personalized medicine.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163710</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-efficiency, low-loss Floquet Josephson Traveling Wave Parametric Amplifier</title>
<link>https://hdl.handle.net/1721.1/163709</link>
<description>High-efficiency, low-loss Floquet Josephson Traveling Wave Parametric Amplifier
Wang, Jennifer
Advancing error-corrected quantum computing and fundamental science necessitates quantum-limited amplifiers with near-ideal quantum efficiency and multiplexing capability. However, existing solutions achieve one at the expense of the other; for example, Josephson traveling wave parametric amplifiers (JTWPAs) are high-gain, broadband, and chip-based quantum amplifiers that conventionally incur a bandwidth-noise tradeoff. When operated at 20-dB gain and instantaneous bandwidths of a few GHz, JTWPAs typically reach near-quantum-limited intrinsic efficiencies of 70%-85% relative to that of an ideal phase-preserving quantum amplifier. This is due to information leakage to the sidebands of the JTWPA, which can be recovered by adiabatically transforming the input modes to Floquet modes of the system within the device. In this thesis, we experimentally demonstrate the first Floquet-mode traveling-wave parametric amplifier (Floquet TWPA). Fabricated in a superconducting qubit process, this Floquet TWPA achieves minimal dissipation, quantum-limited noise performance, and broadband operation. Our device exhibits &gt;20-dB amplification over a 3-GHz instantaneous bandwidth, &lt;0.5-dB average in-band insertion loss, and the highest-reported intrinsic quantum efficiency for a TWPA of 92.1 ± 7.6%, relative to an ideal phase-preserving amplifier. When measuring a superconducting qubit, our Floquet TWPA enables a system measurement efficiency of 65.1 ± 5.8%, the highest reported in a superconducting qubit readout experiment utilizing phase-preserving amplifiers, to the best of our knowledge. Finally, we discuss the noise limitations of our current experimental setup, as well as impedance matching strategies that will enable us to push towards ideal JTWPA performance. These general-purpose Floquet TWPAs are suitable for fast, high-fidelity multiplexed readout in large-scale quantum systems and future monolithic integration with quantum processors.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163709</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Scalable Robot Learning without Physical Robots</title>
<link>https://hdl.handle.net/1721.1/163708</link>
<description>Towards Scalable Robot Learning without Physical Robots
Park, Younghyo
The development of generalist robots—capable of performing a wide range of tasks in diverse environments—requires large-scale datasets of robot interactions. Unlike language or vision domains, where data can be passively collected at scale, robotic data collection remains costly, labor-intensive, and constrained by physical hardware. This thesis explores two complementary directions to overcome this challenge. First, we examine the limitations of training robots from scratch using reinforcement learning (RL). While RL has achieved promising results in simulation, its scalability is hindered by a largely overlooked bottleneck: environment shaping. Designing suitable rewards, action and observation spaces, and task dynamics typically requires extensive human intervention. We formalize environment shaping as a critical optimization problem and introduce tools and benchmarks to study and eventually automate this process, a necessary step toward general-purpose RL. Second, we introduce an alternative paradigm for robot data collection that does not rely on real-world robots. Using the Apple Vision Pro, we develop DART, an augmented reality (AR) teleoperation platform that streams human hand motions to cloud-hosted robot simulations. This setup enables scalable, low-latency collection of high-quality robot demonstrations without the overhead of physical setup or maintenance. Our user studies show that DART more than doubles data collection throughput while reducing operator fatigue, and policies trained in simulation using this data successfully transfer to the real world. Together, these contributions address two key bottlenecks in robot learning: the human effort required for RL environment design, and the dependence on physical robots for data. They lay the groundwork for scalable, accessible approaches to training generalist robot models.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163708</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications</title>
<link>https://hdl.handle.net/1721.1/163707</link>
<description>A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications
Golden, Courtney K.
Iterative sparse matrix computations lie at the heart of many scientific computing and graph analytics algorithms. On conventional systems, their irregular memory accesses and low arithmetic intensity create challenging memory bandwidth bottlenecks. To overcome such bottlenecks, distributed-SRAM architectures use tiled arrays of high-bandwidth local storage to achieve very high aggregate memory bandwidth. However, current distributed-SRAM architectures suffer from either poor programmability due to over-specialization or poor compute performance due to inefficient general-purpose hardware. This thesis proposes Quartz, a new architecture that uses short dataflow tasks and reconfigurable compute in a distributed-SRAM system to deliver both high performance and high programmability. Unlike traditional sparse CGRAs or on-die reconfigurable engines, Quartz allows reconfigurable compute to be highly utilized and scaled by (1) providing high memory bandwidth to each processing element and (2) introducing a task-level dataflow execution model that fits this new setting. Our execution model dynamically reconfigures tile hardware based on inter-tile messages to execute tasks on local data with fine-grained data partitioning across tiles. To make execution efficient, we explore novel data partitioning techniques that use graph and hypergraph partitioning to minimize network traffic and balance load. This is especially challenging for computations where one operand’s sparsity pattern (i.e., distribution of nonzeros) exhibits dynamic behavior across iterations, and we are the first to provide techniques to address this case. To ensure programmability, we show how a wide range of computations (expressed in an extended version of tensor algebra’s Einsum notation) and flexible data distributions can be systematically captured in small tasks for execution on Quartz.
We evaluate Quartz in simulation, using an 8-chiplet design with 2,048 tiles and 824 MB of SRAM per chiplet, running six different iterative sparse applications from scientific computing and graph analytics. Quartz’s architecture, data partitioning techniques, and programming model together achieve gmean 26.2× speedup over the prior state-of-the-art programmable distributed-SRAM architecture.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163707</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dipole Contact Engineering for Field-Effect Transistors Based on Two-Dimensional Materials</title>
<link>https://hdl.handle.net/1721.1/163706</link>
<description>Dipole Contact Engineering for Field-Effect Transistors Based on Two-Dimensional Materials
Gupta, Ayush Sagar
In the next several years and decades, the expanded use of artificial intelligence and edge computing will demand more powerful and energy-efficient electronics. Two-dimensional (2D) semiconductors, and in particular transition metal dichalcogenides (TMDs) such as molybdenum disulfide (MoS₂), are promising candidates for future field-effect transistors. TMDs can enable aggressive lateral and vertical device scaling, and they can add computing power density and new memory and sensing capabilities via 3D integration. However, several key challenges remain before 2D-channel transistors become commercially viable, including large contact resistances at the source and drain due to the van der Waals surface of 2D materials and the Fermi level pinning effect. A variety of methods have been explored to make ohmic contacts to MoS₂, the most promising of which so far is to use semimetals such as Bi and Sb; however, these materials suffer from thermal instability. This thesis addresses these challenges by (1) exploring the ultimate limit of contact metal workfunction scaling to better understand the metal-MoS₂ interface, and (2) introducing a new method of reducing contact resistance to 2D materials by inserting dipole layers at the contact interface. Initial work on ultralow-workfunction (ULWF) metal deposition on MoS₂ and subsequent device fabrication is presented, though further study is required to mitigate effects from deposition equipment and the reactive nature of these metals. In parallel, the Janus TMD MoSSe is explored as an example system for dipole contacts: extensive material characterization is performed, and the effect of a dipole layer on the contact properties of FETs is established. Together, these results are a significant step towards solving one of the major hurdles for the commercial introduction of 2D-channel transistors.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163706</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specialization of Vision Representations with Personalized Synthetic Data</title>
<link>https://hdl.handle.net/1721.1/163705</link>
<description>Specialization of Vision Representations with Personalized Synthetic Data
Chae, Nayoung (Julia)
Modern vision models excel at general-purpose downstream tasks. It is unclear, however, how they may be used for personalized vision tasks, which are both fine-grained and data-scarce. Recent works have successfully applied synthetic data to general-purpose representation learning, while advances in Text-to-Image (T2I) diffusion models have enabled the generation of personalized images from just a few real examples. Here, we explore a potential connection between these ideas and formalize the challenge of using personalized synthetic data to learn personalized representations, which encode knowledge about an object of interest and may be flexibly applied to any downstream task relating to the target object. We introduce an evaluation suite for this challenge, including reformulations of two existing datasets and a novel dataset explicitly constructed for this purpose, and propose a contrastive learning approach that makes creative use of image generators. We show that our method improves personalized representation learning for diverse downstream tasks, from recognition to segmentation, and analyze characteristics of image generation approaches that are key to this gain.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163705</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Microservice Design Parameters</title>
<link>https://hdl.handle.net/1721.1/163704</link>
<description>Optimizing Microservice Design Parameters
Chen, Qihang
Production-level cloud services are increasingly deployed as microservices. An important question is, given application logic, how to design an effective microservice architecture. Existing studies have underscored the importance of microservice cohesiveness and coupling, using these metrics to drive automatic design optimizations. However, they have not accounted for the potential impact that such design changes may have on overall system performance, as our case study confirms. In this work, we present a system that can automatically identify microservice designs that are well-balanced across performance, coupling, and cohesiveness to meet cloud providers’ requirements. The system uses a multi-round dynamic programming approach: it selectively identifies promising design candidates, generates the corresponding microservice code, and measures and compares the results to determine the optimal design. The designs produced by our system typically achieve over 20% throughput improvement under the same QoS with less than a 10% increase in average LCOM, and often outperform the original benchmark architectures across all evaluated metrics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163704</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>City in the River: Regeneration of the Santa Catarina River as an Intermittent Urban River</title>
<link>https://hdl.handle.net/1721.1/163703</link>
<description>City in the River: Regeneration of the Santa Catarina River as an Intermittent Urban River
Martínez Chapa, Daniela
Full of dichotomies, the Santa Catarina River is both dry and wet, present but forgotten, central yet disconnected, valued yet feared. How should an intermittent river in a dense urban context be regenerated? This thesis reimagines its ecological, hydrological, and public potential. Set in Monterrey, Mexico, this research addresses the urgent need to rethink water management in the face of the intensifying climate crisis through different urban systems and regeneration strategies within the river basin. Focusing on the Santa Catarina River, long dismissed as a plot, void, or threat, this work proposes how an intermittent river might be re-understood not as an absence of activities or function but as a space of seasonal abundance, ecological possibility, and urban interaction. Historically engineered for control, the river has been used as a flood channel and as a site for markets, sports complexes, transportation corridors, and more. However, rarely has it been seen, treated, or protected as a river. Through the development of a pilot zone, this research suggests a replicable framework of regenerative strategies to slow down, retain, and absorb water flows, supporting both dry and wet season dynamics. These include restoring riparian ecologies, reintroducing soft edges, enabling groundwater recharge, and designing permeable, public, and accessible urban interventions that reconnect the city with the riverbed. This thesis is not a fixed proposal but a living toolkit, an adaptable model to be tested, expanded, and reimagined in the pilot as time and nature take over. At stake is not only the river’s future but also the city’s capacity to shift from resistance to relation, becoming one with it, becoming a city in the river.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163703</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Banjiha Stories (2025)</title>
<link>https://hdl.handle.net/1721.1/163702</link>
<description>Banjiha Stories (2025)
Park, Habin
Banjiha are everywhere in Seoul. You don’t always see them—tucked below eye level, half-hidden underground—but they’re there. First built as military bunkers after the Korean War, later turned into last-resort housing, banjiha have become symbols of urban failure—spaces of neglect, flooding disasters, a problem to be erased. Both media portrayals and policy responses have advocated for their disappearance. But does removal truly protect the people who call these spaces home? This thesis moves beyond the idea that banjiha are simply failures of the city. Through three homes—three lives—it traces how these spaces are shaped, not only by policies and architecture but by the people who inhabit them. A home vulnerable to flooding, where protections exist—but not against the greatest risk. A place worn by time, held together by quiet repairs. A financial foothold in a city where affordable housing is disappearing. A space of temporary sacrifice. A shelter to return to, again and again. This is not just a story of risk or resilience, neglect or demolition. It is a story of how people live; how they adapt, negotiate, and make do in spaces that were never designed with them in mind. Rather than asking how to erase banjiha, this thesis asks: What can we learn by noticing them? What would it mean to shift the conversation—from removal to recognition, from assumption to understanding? To see these homes is to recognize not just their constraints, but the small interventions that could reshape them: a door that opens both ways so no one is trapped, policies that hold upstairs owners accountable for leaks, materials layered to prevent mold rather than mask it. Not grand reinventions, but deliberate shifts—openings for a different way forward. But before deciding what must change, we must first learn to see.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163702</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Inference for Inference Time Scaling of Language Models</title>
<link>https://hdl.handle.net/1721.1/163701</link>
<description>Probabilistic Inference for Inference Time Scaling of Language Models
Puri, Isha
Large language models (LLMs) have achieved significant performance gains via scaling up model sizes and/or data. However, recent evidence suggests diminishing returns from such approaches, motivating a pivot to scaling test-time compute. Existing deterministic inference-time scaling methods, usually with reward models, cast the task as a search problem, but suffer from a key limitation: early pruning. Due to inherently imperfect reward models, promising trajectories may be discarded prematurely, leading to suboptimal performance. We propose a novel inference-time scaling approach by adapting particle-based Monte Carlo methods. Our method maintains a diverse set of candidates and robustly balances exploration and exploitation. Our empirical evaluation demonstrates that our particle filtering methods achieve a 4–16× better scaling rate than deterministic search counterparts on both challenging mathematical tasks and more general reasoning tasks. Using our approach, we show that Qwen2.5-Math-1.5B-Instruct surpasses GPT-4o accuracy in only 4 rollouts, while Qwen2.5-Math-7B-Instruct scales to o1-level accuracy in only 32 rollouts. Our work not only presents an effective method for inference-time scaling, but also connects the rich literature in probabilistic inference with inference-time scaling of LLMs to develop more robust algorithms in future work. Code, videos, and further information available at probabilistic-inference-scaling.github.io/
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163701</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Systematic Integration of Inverter-Based Resources in Electricity Markets</title>
<link>https://hdl.handle.net/1721.1/163700</link>
<description>Toward Systematic Integration of Inverter-Based Resources in Electricity Markets
Pierre, Jordina
This thesis introduces a multi-layer control architecture for inverter-based resources (IBRs), separating fast local feedback control from slower self-dispatch and system-level market coordination. Existing integration methods for IBRs limit their control flexibility and completely restrict their market participation potential. Two common practices include treatment of IBRs as negative loads and setting a fixed power factor during grid commissioning. Modeling IBRs as negative loads excludes them from dispatch coordination in electricity markets, significantly limiting the incentive for contribution to grid reliability and flexibility. Likewise, a fixed power factor prevents the IBR from providing voltage support through reactive power absorption/injection. With a fixed power factor, constant real and reactive power limits are imposed on the inverter, even during voltage transients, ignoring the fact that an inverter’s available capacity can vary significantly due to internal current constraints and the power provided by the renewable energy source. To address the need for reactive power adjustment in IBRs and pave the way for their active participation in electricity markets, this work presents a coordinated control approach that enables IBRs to transition into active, self-dispatching participants. In the first layer, this thesis proposes a hybrid PLL plus Q-V droop-based controller, which governs millisecond-scale autonomous behavior, including low-voltage ride-through and real-time power adjustment based on voltage deviations at the point of common coupling and irradiance fluctuations from the renewable energy source, in this case solar. Given the implementation from the first layer and predicted irradiance, Layer 2, which will be implemented in future work, uses a model predictive controller to provide bid functions for both real and reactive power while keeping voltage at the point of common coupling within its limits.
Finally, the third layer performs centralized market clearing through a security-constrained optimization by the system operator. By advocating for self-dispatched, constraint aware control, this thesis challenges the prevailing passive modeling paradigm and offers a structured, physics-informed alternative. It demonstrates how IBRs can evolve into reliable, market-integrated assets, enabling smarter renewable integration and a more resilient, cost-effective and decarbonized grid.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163700</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approximations to worst-case data dropping: unmasking failure modes</title>
<link>https://hdl.handle.net/1721.1/163699</link>
<description>Approximations to worst-case data dropping: unmasking failure modes
Huang, Jenny Yijian
A data analyst might worry about generalization if dropping a very small fraction of data points from a study could change its substantive conclusions. Checking this non-robustness directly poses a combinatorial optimization problem and is intractable even for simple models and moderate data sizes. Recently, various authors have proposed a diverse set of approximations to detect this non-robustness. In the present work, we show that, even in a setting as simple as ordinary least squares (OLS) linear regression, many of these approximations can fail to detect (true) non-robustness in realistic data arrangements. We focus on OLS in the present work due to its widespread use and because some approximations work only for OLS. Of the approximations that do not fail our tests, we find not only that a simple recursive greedy algorithm is the most conceptually straightforward but also that it can be orders of magnitude faster to run than the others.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163699</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Highly Scaled p-GaN-gate HEMTs for Low Voltage Power Electronics</title>
<link>https://hdl.handle.net/1721.1/163698</link>
<description>Highly Scaled p-GaN-gate HEMTs for Low Voltage Power Electronics
Darmawi-Iskandar, Patrick
Rising global energy demands, driven by the advent of artificial intelligence (AI), cloud computing, and Internet of Things (IoT) devices, underscore the need for more efficient power electronics. In particular, power switches based on wide bandgap semiconductors such as gallium nitride (GaN) have emerged as promising alternatives to traditional silicon devices for low-voltage (10–100 V) applications. This work investigates the design, fabrication, and scaling of p-GaN-gate high-electron-mobility transistors (HEMTs). A p-GaN-gate epitaxial structure was developed with considerations for short channel effects. A self-aligned, gate-first process employing tungsten metallization was implemented to enable gate lengths as small as 100 nm. Device scaling was studied systematically, revealing the importance of gate aspect ratio and gate-to-drain spacing in managing short channel effects and maintaining breakdown voltage. Electrical characterization showed strong device performance, although contact resistance accounted for a substantial portion of total on-resistance. To address this, a modified fabrication approach incorporating regrown contacts was introduced, resulting in reduced contact resistance and improved overall device characteristics. The combined results highlight practical strategies for enhancing the performance and scalability of p-GaN-gate HEMTs for next-generation low-voltage power electronics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163698</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modelling Diarists: Diary-writing and Moral Anxieties in China, 1918–62</title>
<link>https://hdl.handle.net/1721.1/163697</link>
<description>Modelling Diarists: Diary-writing and Moral Anxieties in China, 1918–62
Li, Tien Yi
This thesis is a history of diary-writing in China from 1918 through 1961. Diaries are an increasingly popular but still inadequately understood primary source for historians of modern China. Previous scholars have suggested that, in the twentieth century, diary-writing became increasingly popular due to Japanese and Soviet influences, the increasing availability of manufactured blank diaries, and ruling governments that used diary-writing as a way of enforcing ideological conformity. This thesis traces an alternative history, starting from the popularization of published diaries in Shanghai in the long 1920s; to diaries’ emergence as a recognizable genre that could be discussed and theorized; to the moment the genre gained its reputation as a kind of self-expression par excellence; to its widespread inclusion into school curricula; to loosely connected attempts on the part of educators to delimit a normative way of diary-writing that, ironically, increasingly regimented self-expression. In doing so, this thesis contributes to the existing historiography by offering three correctives: I argue that 1) the initial proliferation of diaries was economically––not ideologically––motivated, 2) the popularization of diary-writing was not a concerted effort orchestrated by China’s political leaders but at best a loosely connected effort led by a middling class of educators, textbook writers, and intellectuals, and 3) diary-writing was regimented not only by communist ideology in the Maoist era but also by shifting moral principles and anxieties throughout the twentieth century. All in all, this thesis demonstrates the value of diaries for studying moral knowledge, epistemologies, and anxieties at the grassroots in midcentury China.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163697</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Image of the Tunnels: Mapping Perception of the MIT Underground</title>
<link>https://hdl.handle.net/1721.1/163696</link>
<description>The Image of the Tunnels: Mapping Perception of the MIT Underground
Ravichandran, Shruthi
Kevin Lynch’s influential book, The Image of the City, proposes five elements by which residents of a space create mental maps of their neighborhood and use these to define their spatial perception and navigation: paths, edges, districts, nodes, and landmarks. The MIT Tunnels are spaces utilized daily for a myriad of purposes: to reach labs and offices, to avoid slow-moving tourist traffic and biting Boston cold, and to explore MIT’s iconic hacking spots. This work explores whether Lynchian principles apply to these pseudo-urban underground spaces and culminates in a GeoGuessr-inspired virtual game where students can test and grow their knowledge of tunnel navigation. The hypotheses tested in this thesis project extend Lynch’s framework to relevant tunnel analogs: familiar paths, districts (clusters of buildings and departments), tunnel landmarks, and cross-level relationships between above- and underground mental maps. These hypotheses were first tested via preliminary surveys of MIT students. Once the surveys were completed, the subsequent experiments involved two games: one physically in the tunnels, one online with images of the tunnels gathered with a 360° camera. The games involved having participants navigate to a target building from a starting point. After the in-person game was completed, participants answered a series of questions about their route. These races offered information about familiar paths, landmarks, and strategies participants used to navigate the tunnels. Results from this game confirmed conclusions drawn from the preliminary surveys that Lynchian principles do extend to the tunnels via relevant analogs, and above-ground knowledge and connection points offered even more information than Lynch’s five principles alone. Students consistently rely on heavily traveled paths, navigate through familiar districts, and use above-ground knowledge to traverse unfamiliar underground buildings.
This work can be extended to help grow students’ understanding of these tunnels, fostering further creativity and student expression in this complex network of spaces.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163696</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Batch-Dynamic Graph Algorithms: Coreness Decomposition and Spanners</title>
<link>https://hdl.handle.net/1721.1/163695</link>
<description>Parallel Batch-Dynamic Graph Algorithms: Coreness Decomposition and Spanners
Koo, Jaehyun
This thesis contributes to the burgeoning field of batch-dynamic parallel algorithms by presenting parallel batch-dynamic graph algorithms for coreness decomposition and spanners, as well as a number of other related problems. The first class of problems we consider involves approximating coreness decomposition and several closely related concepts, such as (subgraph) density estimation, arboricity estimation, and low out-degree orientations. These are extremely useful structures for organizing graphs based on their density. Our algorithms process any batch of edge insertions and deletions in polylogarithmic depth while using work that is linear in the batch size (up to logarithmic factors), in the worst case. The second class of problems we consider concerns graph spanners. Over the past two to three decades, graph sparsifications that approximately preserve key graph properties have become essential tools in algorithm design. In particular, spanners—reducing the number of edges while approximately preserving pairwise distances—have been widely studied. We present the first such algorithms for computing and maintaining spanners. These algorithms achieve near-optimal amortized runtime—processing each batch in polylogarithmic depth with work nearly linear in the batch size for any number of processors.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163695</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation</title>
<link>https://hdl.handle.net/1721.1/163694</link>
<description>Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation
Fey, Nolan
Achieving athletic loco-manipulation on robots requires moving beyond traditional tracking rewards—which simply guide the robot along a reference trajectory—to task rewards that drive truly dynamic, goal-oriented behaviors. Commands such as “throw the ball as far as you can” or “lift the weight as quickly as possible” compel the robot to exhibit the agility and power inherent in athletic performance. However, training solely with task rewards introduces two major challenges: these rewards are prone to exploitation (reward hacking), and the exploration process can lack sufficient direction. To address these issues, we propose a two-stage training pipeline. First, we introduce the Unsupervised Actuator Net (UAN), which leverages real-world data to bridge the sim-to-real gap for complex actuation mechanisms without requiring access to torque sensing. UAN mitigates reward hacking by ensuring that the learned behaviors remain robust and transferable. Second, we use a pre-training and fine-tuning strategy that leverages reference trajectories as initial hints to guide exploration. With these innovations, our robot athlete learns to lift, throw, and drag with remarkable fidelity from simulation to reality.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163694</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Wound Designates a Subject</title>
<link>https://hdl.handle.net/1721.1/163693</link>
<description>A Wound Designates a Subject
Lum, Luca E.
What haunts when haunting itself has been foreclosed? This thesis develops “ghostlessness” as a conceptual and aesthetic framework across my work in moving image, drawing, and writing. Ghostlessness refers to conditions that suppress haunting where it would otherwise emerge or be felt. Drawing from theoretical elaborations of hauntology, where the present is understood as structured by both suppressed pasts and unrealized futures, ghostlessness names the absence—or foreclosure—of that temporal disruption. It marks a contemporary condition in which systems oriented toward predictive governance and managed futurity preemptively neutralize rupture, sealing wounds before they can fester, reroute, or become sites of transformation. Through the works gathered here, I explore how ghostlessness functions not simply as absence but as affective and infrastructural suppression—rendering the spectral illegible, unaddressable, or unreal. Against this, my practice seeks to recapture the value of haunting in death-ridden, crisis-laden times where its presence is more prevalent than ever – hence its management, erasure, and suppression: ghostlessness.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163693</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stylizing 3D Models With Generative AI for Fabrication</title>
<link>https://hdl.handle.net/1721.1/163692</link>
<description>Stylizing 3D Models With Generative AI for Fabrication
Tejedor, Leandra
This thesis presents two novel approaches for modifying 3D models using generative AI for stylization while ensuring the resulting models preserve the properties required for fabrication. The first method, Style2Fab, separates functional and stylistic sections of 3D models to enable targeted modifications that preserve the model's intended functionality. By distinguishing between these sections, Style2Fab allows for alterations that maintain the model's functional purpose while providing flexibility in its aesthetic design. This approach ensures that the modified models retain their original functionality after stylistic changes.&#13;
&#13;
The second method, MechStyle, incorporates finite element analysis (FEA) into the generative modeling pipeline to maintain the structural integrity of the modified models. By analyzing changes in stress values during a simulated drop test at various stages of the stylization process, MechStyle restricts changes to those that preserve the model's structural viability. This ensures that the resulting models are both stylistically accurate to the user's desired results and structurally sound for 3D printing.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163692</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Limits of Recovering Planted Subgraphs</title>
<link>https://hdl.handle.net/1721.1/163691</link>
<description>The Limits of Recovering Planted Subgraphs
Rajaraman, Amit
Given an arbitrary subgraph H = Hₙ and p = pₙ ∈ (0, 1), the planted subgraph model is defined as follows. A statistician observes the union of the “signal,” which is a random “planted” copy H* of H, together with random noise in the form of an instance of an Erdős–Rényi graph G(n, p). Their goal is to then recover the planted H* from the observed graph. Our focus in this work is to understand the minimum mean squared error (MMSE), defined in terms of recovering the edges of H*, as a function of p and H, for large n. A recent paper [MNS⁺23] characterizes the graphs for which the limiting (as n grows) MMSE curve undergoes a sharp phase transition from 0 to 1 as p increases, a behavior known as the all-or-nothing phenomenon, up to a mild density assumption on H. However, their techniques fail to describe the MMSE curves for graphs that do not display such a sharp phase transition. In this paper, we provide a formula for the limiting MMSE curve for any graph H = Hₙ, up to the same mild density assumption. This curve is expressed in terms of a variational formula over pairs of subgraphs of H, and is inspired by the celebrated subgraph expectation thresholds from probabilistic combinatorics [KK07]. Furthermore, we give a polynomial-time description of the optimizers of this variational problem. This allows one to efficiently approximately compute the MMSE curve for any dense graph H when n is large. The proof relies on a novel graph decomposition of H as well as a new minimax theorem which may be of independent interest. Our results generalize to the setting of minimax rates of recovering arbitrary monotone boolean properties planted in random noise, where the statistician observes the union of a planted minimal element A ⊆ [N] of a monotone property and a random Ber(p)^⊗N vector.
In this setting, we provide a variational formula inspired by the so-called “fractional” expectation threshold [Tal10], again describing the MMSE curve (in this case up to a multiplicative constant) for large n.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163691</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Routing in the CityMesh Decentralized Fallback Wireless Network</title>
<link>https://hdl.handle.net/1721.1/163690</link>
<description>Efficient Routing in the CityMesh Decentralized Fallback Wireless Network
Liu, Ziqian
As modern communication systems increasingly rely on centralized network infrastructure, they become more vulnerable to disruptions caused by disasters, failures, or cyberattacks. To address this risk, CityMesh proposes a decentralized fallback wireless network that leverages existing Wi-Fi devices, such as access points (APs), in buildings to maintain essential connectivity during outages. However, achieving scalable and reliable message delivery in such a network, without introducing excessive overhead, poses significant challenges. This thesis presents a new routing protocol for CityMesh, designed to operate efficiently at city scale. We first identify the limitations of traditional shortest-path source routing in CityMesh’s context, including the use of unreliable links and overhead from redundant transmissions. To address these issues, we introduce a safer path selection metric that prioritizes link reliability, a waypoint-based routing compression scheme, and a conduit mechanism to increase robustness to local failures. Our protocol further supports compact routing tables through a grid-based addressing scheme, enabling constant-size packet headers and scalable routing decisions. Additionally, we propose a suppression strategy to reduce unnecessary transmissions both between and within buildings. Finally, to reconnect disconnected network segments, we develop a practical relay placement algorithm that leverages convex hull optimization and reuses global map knowledge, ensuring fast computation of relay points in feasible locations such as roads and bridges. Simulations across 20 global cities show that our routing protocol achieves up to 2× higher packet delivery rates and reduces transmission overhead by up to 28× compared to GPSR under high packet loss and realistic localization error.
The routing table footprint sampled across 4 randomly selected cities shows on average under 2 KB memory usage per device. Our fast relay placement algorithm also demonstrates only a small number of relays are needed to achieve full network connectivity for most of the cities, which validates CityMesh’s core premise that existing urban Wi-Fi infrastructure is sufficient to support a robust, scalable decentralized fallback network with minimal augmentation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163690</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>GPU-accelerated Inference for Discrete Probabilistic Programs</title>
<link>https://hdl.handle.net/1721.1/163689</link>
<description>GPU-accelerated Inference for Discrete Probabilistic Programs
Ghavami, Matin
This thesis presents a comprehensive approach to GPU-accelerated inference for discrete probabilistic programs. We make two key contributions: (1) a factor graph IR implemented in JAX that supports variable elimination and Gibbs sampling, and (2) a modeling DSL with a compiler that lowers programs to the factor graph IR. Our system enables significant performance optimizations through static analysis of the factor graph structure. Variable elimination is optimized by reduction to tensor contraction with optimized contraction paths, while Gibbs sampling is automatically parallelized through graph coloring techniques. Empirical evaluations on standard benchmarks demonstrate orders of magnitude performance improvements over existing systems, with the parallelized Gibbs sampler showing speed-ups of up to 144x on Bayesian networks and even greater improvements for models with regular graph topologies such as Ising models and hidden Markov models.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163689</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Vernaculars of Our Networks: From The Cloud to a Plurality of Grassroots Digital Infrastructures</title>
<link>https://hdl.handle.net/1721.1/163688</link>
<description>The Vernaculars of Our Networks: From The Cloud to a Plurality of Grassroots Digital Infrastructures
Hernandez-Cornejo, Mark A.
This thesis is concerned with DIY "off-the-cloud" networks as socio-technical models that can reinscribe a community's organizational processes, identity, and culture. It questions how these networks can break away from corporate and extractive services of "the cloud" in order to achieve digital sovereignty as well as resist the hegemonic understanding of Western universal technology. Rather than grafting an outside network onto a community, how might the nodes of a network emerge from the cultural ontologies and local knowledge systems, creating a "vernacular cloud," with political, epistemic, and ontological implications? The social practice of what I call 'net/work' involves the facilitation of local digital territories that create a grassroots politics of "organic internets." In Chapter One, recent attempts to break from monopolized services like Google and Facebook are examined, providing insight into why these networks are formed and how they “de-link” from “the cloud.” Drawing from Walter Mignolo's understanding of "de-linking," the thesis argues that this process is a political project that is also epistemologically and economically non-western. Chapter Two examines the notion of 'community' in community networks through the lens of grassroots organizing such as mutual aid, delving into the care and maintenance required for system administration. Chapter Two builds on Geri Augusto's understanding of "re/trans" as a project that has developed new assemblages of knowledge and integrated them into different landscapes. It examines community networks from the Global South, where network nodes have the potential to be cosmo-ontological. Chapter Three provides examples of the principles outlined in Chapters One and Two from my work in pursuit of technical autonomy within an organization.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163688</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensing Buildings: Environmental Impact of Sensor Technologies and Data Infrastructure in Buildings</title>
<link>https://hdl.handle.net/1721.1/163687</link>
<description>Sensing Buildings: Environmental Impact of Sensor Technologies and Data Infrastructure in Buildings
Lesina-Debiasi, Simon
Building operations and the construction sector are among the largest contributors to global carbon emissions and energy consumption. While novel construction materials and insulation offer lower embodied carbon solutions, improved heating and cooling devices offer cost- and energy-effective building services. Above all, “smart” devices promise remote control, oversight, and optimization of building operations. With the rising implementation of AI solutions in every sector, it is important to see digital devices as an interface to the material machinery they are connected to. The way these systems are presented to us as solutions to environmental problems leaves out the operational and infrastructural costs of the devices themselves. From the mining operations that source rare earth minerals, to the pumping of oil for polymer coatings, to the chemical baths that separate metal from ore, all the way to the hard drives in server rigs that are cooled with water and driven by electricity, the cloud is nothing but materiality and resources. When building operations and construction techniques are evaluated for sustainability and environmental impact, connected services such as data networks and optimizations that rely on large server infrastructures and cloud computing are not part of the scope. This thesis reveals the missing components of energy evaluations of the “smart” devices within the walls, floors, windows, doors, and roofs of our buildings, to create a framework through which building efficiency and sustainability can be reconsidered. Through historical research, literature reviews, and experiments, this work sheds light on the environmental impact of the data infrastructure to which our buildings are connected. The work presented in this thesis does not claim to be comprehensive nor to solve the problem of optimizing buildings for energy efficiency.
Instead, the goal is to build upon existing and established research on data infrastructure, smart technology, and climate science, showing that while current efforts may improve a building’s efficiency on-site, the off-site energy consumption they entail must also be taken into account.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163687</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonization strategies for North American urban landscapes: Evaluating pavements and vegetation across design typologies</title>
<link>https://hdl.handle.net/1721.1/163686</link>
<description>Decarbonization strategies for North American urban landscapes: Evaluating pavements and vegetation across design typologies
Ramirez Cuebas, Adriana
Urban landscapes are increasingly recognized as critical to climate mitigation, yet remain underrepresented in carbon accounting frameworks relative to buildings and infrastructure. This thesis advances landscape carbon assessment by introducing a typology-based Life Cycle Assessment (LCA) framework for landscape architecture.
The framework integrates anthropogenic emissions and natural carbon dynamics while addressing uncertainty. It proceeds through three layers of analysis: 1) developing landscape system and project categories for carbon footprint benchmarking; 2) benchmarking the performance of the proposed landscape systems and urban typologies; and 3) assessing the mitigation potential of decarbonization strategies across systems and project types.
Concrete pavers on reinforced concrete slabs and asphalt pavements (78 to 104 kgCO₂e/m²) are the most carbon intensive in the production-to-construction stage. Turfgrass and shrubs show wide variability (-21 to 42 kgCO₂e/m² and -35 to 258 kgCO₂e/m², respectively), functioning as sources or sinks depending on species mix, maintenance, and flux magnitudes, underscoring the need for species-specific, ecologically dynamic modeling. Canopy systems act as consistent carbon sinks (-611 to -388 kgCO₂e/m² over 50 years) despite significant emissions from transportation and structural soil.
Landscape systems were used to benchmark four urban typologies: streetscapes, plazas, courtyards, and urban parks. Their 50-year carbon footprints range from -80 to 21 kgCO₂e/m² in urban parks, -13 to 63 in courtyards, 22 to 79 in plazas, and 3 to 80 in streetscapes. Applying decarbonization strategies allows all typologies to achieve net carbon sink status at the high bound. Urban parks achieve neutrality immediately post-construction, courtyards in 13 years, plazas in 26 years, and streetscapes by year 33. At higher emission estimates, urban parks and courtyards deepen carbon sink performance, plazas cross into net sink territory, and streetscapes approach neutrality. The detailed findings highlight the influence of planting density, maintenance regimes, and land cover composition.
By structuring assessment around land covers and urban typologies, this thesis delivers a transferable carbon accounting framework aligned with design practice, offering actionable insights for embedding climate accountability into landscape architecture and public policy.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163686</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulation and Design of Quantum Processors for Low‑Overhead Quantum Error Correction</title>
<link>https://hdl.handle.net/1721.1/163685</link>
<description>Simulation and Design of Quantum Processors for Low‑Overhead Quantum Error Correction
Pahl, David
This thesis investigates the simulation and design of the hardware architecture required for large‑scale quantum error correction (QEC). Specifically, we design microwave circuits for fast and high‑fidelity readout and devise a long‑range coupler (LRC) that spans five qubit lattice sites, suitable for low‑overhead quantum low‑density parity‑check (qLDPC) codes [1]. We present a prototypical nine‑qubit qLDPC code incorporating two long‑range couplers and optimized readout circuits, achieving state‑of‑the‑art readout fidelities of up to 99.63% in 56 ns and demonstrating strong, well‑targeted couplings mediated by the LRC. Our simulations employ an efficient microwave abstraction based on ABCD transfer matrices, modeling complete qubit devices as networks of circuit elements. We use this formalism to develop a closed‑loop optimization algorithm that determines optimal readout parameters in seconds. The ABCD framework also accurately captures the multi‑mode behavior of the LRC, offering a valuable tool for developing large‑scale, low‑overhead QEC devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163685</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cost-Based Optimization for Semantic Operator Systems</title>
<link>https://hdl.handle.net/1721.1/163684</link>
<description>Cost-Based Optimization for Semantic Operator Systems
Russo, Matthew D.
Recently, AI developers have turned to modular AI systems in order to achieve state-of-the-art performance on challenging benchmarks and industry problems. New programming frameworks have enabled developers to build these systems by composing them out of semantic operators—i.e., LLM-powered maps, filters, joins, aggregations, etc.—inspired by relational operators from data management systems. While these systems of semantic operators can achieve strong performance on benchmarks, they can be difficult to optimize. For example, an optimizer may need to determine which model, prompting strategy, and retrieval mechanism to use for each operator. Existing optimizers are limited in the number of optimizations they can apply, and most (if not all) cannot optimize system quality, cost, or latency subject to constraint(s) on the other dimensions. In this thesis, we build an extensible, cost-based optimizer called Abacus, which searches for the best implementation of a semantic operator system given a (possibly constrained) optimization objective. The optimizer estimates operator performance by leveraging a minimal set of training examples and, if available, prior beliefs about operator performance. We evaluate the optimizer on a range of workloads including biomedical multi-label classification (BioDEX), information extraction from legal contracts (CUAD), and multi-modal question answering (MMQA). We demonstrate that systems optimized by our work achieve 18.7%-39.2% better quality and up to 23.6x lower cost and 4.2x lower latency than the next best system.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163684</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games</title>
<link>https://hdl.handle.net/1721.1/163683</link>
<description>Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games
Pipis, Charilaos
We propose efficient no-regret learning dynamics and ellipsoid-based methods for computing linear correlated equilibria—a relaxation of correlated equilibria and a strengthening of coarse correlated equilibria—in general convex games. These are games where the number of pure strategies is potentially exponential in the natural representation of the game, such as extensive-form games. Our work identifies linear correlated equilibria as the tightest known notion of equilibrium that is computable in polynomial time and is efficiently learnable for general convex games. Our results are enabled by a generalization of the seminal framework of Gordon et al. [2008] for Φ-regret minimization, providing extensions to this framework that can be used even when the set of deviations Φ is intractable to separate/optimize over. Our polynomial-time algorithms are similarly enabled by extending the Ellipsoid-Against-Hope approach of Papadimitriou and Roughgarden [2008] and its generalization to games of non-polynomial type proposed by Farina and Pipis [2024a]. We provide an extension to these approaches when we do not have access to the separation oracles required by these works for the dual player. This work will appear in STOC 2025, [Daskalakis et al., 2025].
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163683</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explaining Black-Box Classifiers by Implicitly Learning Decision Trees</title>
<link>https://hdl.handle.net/1721.1/163682</link>
<description>Explaining Black-Box Classifiers by Implicitly Learning Decision Trees
Lange, Jane
We present algorithms for finding two types of objects that explain the classification of a black-box model f : {±1}^d → {±1} on an instance x ∈ {±1}^d. The first is a certificate: a small set of x’s features that in conjunction essentially determines f(x). The second is a counterfactual: a nearest instance x′ for which f(x′) ≠ f(x). We obtain both algorithms via a connection to the problem of implicitly learning decision trees. The implicit nature of this learning task allows for efficient algorithms even when the complexity of f necessitates an intractably large surrogate decision tree. We solve the implicit learning task by bringing together techniques from learning theory, local computation algorithms, and complexity theory. Our approach of “explaining by implicit learning” shares elements of two previously disparate methods for post-hoc explanations, global and local explanations, and we make the case that it enjoys advantages of both. Our certification algorithm runs in time poly(d, C(f)) and outputs a certificate of size poly(C(f)), where C(f) is the “average certificate complexity” of f. Our counterfactual algorithm runs in time S(f)^O(∆f(x)) · log d, where S(f) is the sensitivity of f (a discrete analogue of the Lipschitz constant) and ∆f(x) is the distance from x to its nearest counterfactual. We further prove a lower bound of S(f)^Ω(∆f(x)) + Ω(log d) for finding counterfactuals, thereby showing that the guarantees of our algorithm are essentially optimal.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163682</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analog On-chip Training and Inference with Non-volatile Memory Devices</title>
<link>https://hdl.handle.net/1721.1/163681</link>
<description>Analog On-chip Training and Inference with Non-volatile Memory Devices
Lee, Jungsoo
As the demand for computation in neural networks continues to rise, conventional computing resources are increasingly constrained by their limited energy efficiency. One promising solution to this challenge is analog in-memory computing (AIMC), which enables efficient matrix-vector multiplications by encoding synaptic weights into the conductance of non-volatile memory devices structured into crossbar arrays. To explore the potential of non-volatile memory devices in AIMC, I simulate crossbar array operations using IBM’s AIHWKIT. With this tool, I investigate the implementation of various analog computing algorithms, including TikiTaka. AIMC is evaluated on simple MNIST classification tasks and on a more complex class of deep learning models, Long Short-Term Memory (LSTM) networks. I demonstrate that devices can be categorized based on their asymmetry and non-linear weight modulation behavior. Performance improvements through the TikiTaka algorithm are observed only when the device provides a sufficient converge-dragging force; otherwise, the algorithm may even degrade performance. I also investigate how pulse-to-pulse noise and device-to-device variability affect system performance, as well as how different peripheral circuit configurations influence the overall behavior. Finally, I propose an Analog Low-Rank Adapter (Analog LoRA) by applying analog computing to the fine-tuning of large language models. I explore the necessary conditions for Analog LoRA to achieve performance comparable to its digital counterpart. Based on these findings, I present design guidelines for effectively applying analog computing to various machine learning tasks on edge devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163681</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>CMOS-Compatible Wafer-Scale Synthesis and Rapid Characterization of Two-Dimensional Transition Metal Dichalcogenides</title>
<link>https://hdl.handle.net/1721.1/163680</link>
<description>CMOS-Compatible Wafer-Scale Synthesis and Rapid Characterization of Two-Dimensional Transition Metal Dichalcogenides
Jiao, Yixuan
Two-dimensional transition metal dichalcogenides (TMDs) such as monolayer MoS₂ offer great promise for next-generation nanoelectronics due to their atomic thickness, tunable bandgaps, and excellent electrostatic control. However, industrial semiconductor manufacturing demands CMOS-compatible, wafer-scale growth, yet conventional CVD methods often exceed thermal budgets and introduce contaminants, and achieving uniform, defect-free monolayers remains difficult. This thesis presents an in-depth discussion of low-temperature MOCVD system design and optimization methodology for uniform monolayer TMD synthesis. We investigate the effect of alkali halide promoters (e.g., NaCl) and novel alkali-free promoters (e.g., NH₄Cl and crystal violet) on the synthesis of monolayer MoS₂. By optimizing the NaCl-promoted route, we achieve coalesced monolayer MoS₂ films with enlarged grain domains and demonstrate field-effect transistors with improved mobility. In parallel, we develop a CMOS-compatible crystal violet seeding method that avoids alkali metal contaminants and yields uniform monolayer coverage. To support process development, a rapid characterization pipeline was introduced: optical/SEM imaging combined with machine learning to quickly map thickness and grain size and infer electronic quality across the wafer. These contributions collectively advance the integration of 2D TMD materials into CMOS fabrication, enabling monolithic 3D integration in future electronics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163680</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automating the Search for Artificial Life with Foundation Models</title>
<link>https://hdl.handle.net/1721.1/163679</link>
<description>Automating the Search for Artificial Life with Foundation Models
Kumar, Akarsh
With the recent Nobel Prize awarded for radical advances in protein discovery, foundation models (FMs) for exploring large combinatorial spaces promise to revolutionize many scientific fields. Artificial Life (ALife) has not yet integrated FMs, thus presenting a major opportunity for the field to alleviate the historical burden of relying chiefly on manual design and trial-and-error to discover the configurations of lifelike simulations. This paper presents, for the first time, a successful realization of this opportunity using vision-language FMs. The proposed approach, called Automated Search for Artificial Life (ASAL), (1) finds simulations that produce target phenomena, (2) discovers simulations that generate temporally open-ended novelty, and (3) illuminates an entire space of interestingly diverse simulations. Because of the generality of FMs, ASAL works effectively across a diverse range of ALife substrates including Boids, Particle Life, Game of Life, Lenia, and Neural Cellular Automata. A major result highlighting the potential of this technique is the discovery of previously unseen Lenia and Boids lifeforms, as well as cellular automata that are open-ended like Conway’s Game of Life. Additionally, the use of FMs allows for the quantification of previously qualitative phenomena in a human-aligned way. This new paradigm promises to accelerate ALife research beyond what is possible through human ingenuity alone.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163679</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty-aware Joint Physical Tracking and Prediction</title>
<link>https://hdl.handle.net/1721.1/163678</link>
<description>Uncertainty-aware Joint Physical Tracking and Prediction
Dasgupta, Arijit
Humans possess a remarkable capacity to track and predict the motion of objects even when visual information is temporarily absent. This thesis investigates how missing sensory evidence—such as during occlusion—alters current and future beliefs about object motion, and introduces an uncertainty-aware framework to model this process. A behavioral experiment was conducted in which participants continuously predicted the future destination of a ball moving in 2.5D environments with occlusion. Results demonstrate that participants dynamically updated their predictions throughout occlusion, exhibiting adaptive belief revision and physically grounded reasoning. To model this behavior, a structured Bayesian modeling and inference approach for joint tracking and prediction was developed that integrates perception, state estimation, and future prediction in a unified process. The approach, implemented via a Sequential Monte Carlo algorithm embedded within a GPU-accelerated and parallel probabilistic programming system, maintains time-varying beliefs over both present and future object states, conditioned on observed images. These belief states are explicitly represented in symbolic form, enabling interpretable, frame-by-frame introspection of uncertainty and prediction over time. When compared against human responses, the model closely matched the temporal evolution of time-aligned decisions and outperformed plausible alternative hypotheses that failed to reason during occlusion. These findings affirm that the absence of changing visual evidence does not engender a void in physical reasoning, but is evidence in itself—processed and revised through structured, probabilistic inference. 
By integrating probabilistic programming with human behavioral data through structured Bayesian modeling and inference, this thesis advances a computational account of intuitive physical reasoning and provides a foundation for building interpretable, uncertainty-aware AI systems that mirror human-like physical intelligence.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163678</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward an Age-Ready Suburbia</title>
<link>https://hdl.handle.net/1721.1/163677</link>
<description>Toward an Age-Ready Suburbia
Du, Minghao; Zhuang, Kaicheng
As America’s population ages, suburban neighborhoods face urgent challenges. Originally designed for young, car-dependent families, the suburban landscape today often presents barriers to aging in place, including poor walkability, inaccessible housing, and limited access to essential services and care. This thesis investigates these challenges and proposes a strategy for reimagining suburban environments through demographic analysis, spatial mapping, persona-driven research, architectural prototyping, and community planning. It traces the historical evolution of suburbia, critically evaluates existing senior housing typologies, and advances new frameworks for retrofitting residential neighborhoods to better support aging populations. Focusing on Sacramento, California, the research identifies high-priority areas where aging, affordability challenges, and mobility barriers intersect. Grounded by a pilot care home project, the study demonstrates how modest interventions, such as retrofitting single-family homes into small-scale residential care environments, can enhance both livability and care access. The first phase of the pilot project has been constructed, offering a demonstration of the proposed model’s feasibility. A phased development and financial strategy are also outlined to ensure broader applicability. While rooted in Sacramento, the thesis offers a framework relevant to many suburban contexts across the United States, particularly naturally occurring retirement communities (NORCs) where older adults are aging in place. Rather than creating isolated senior enclaves, the work promotes a distributed, community-integrated model that strengthens neighborhood resilience and supports intergenerational living. By combining design innovation with policy awareness and development feasibility, the thesis presents a scalable and adaptable approach to reshaping suburbs for an aging society.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163677</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Calibration and Control of Superconducting Qubits for Low‑Overhead Quantum Error Correction</title>
<link>https://hdl.handle.net/1721.1/163676</link>
<description>Calibration and Control of Superconducting Qubits for Low‑Overhead Quantum Error Correction
Pahl, Lukas
The ability to coherently and reliably manipulate quantum information marks a fundamental technological leap—realizable through a universal, fault‑tolerant quantum computer. Achieving this goal requires progress across all layers of the quantum computing stack, from physical qubits to theoretical algorithms. In this work, we address multiple layers of this stack. We develop a software architecture for scalable device calibration using modular calibration graphs. We introduce real‑time frequency stabilization techniques, demonstrating improved single‑qubit gate fidelities and progress toward multi‑qubit feedback. Finally, we explore how quantum error correction overhead can be reduced using low‑density parity‑check codes. We present logical protocols for a non‑local nine‑qubit code, which significantly outperforms comparable surface code implementations in both qubit efficiency and computational capability. These results represent practical steps toward overcoming key challenges in fault‑tolerant quantum computing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163676</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>ModelDiff: A Framework for Comparing Learning Algorithms</title>
<link>https://hdl.handle.net/1721.1/163675</link>
<description>ModelDiff: A Framework for Comparing Learning Algorithms
Shah, Harshay
We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms. We begin by formalizing this goal as one of finding distinguishing feature transformations, i.e., input transformations that change the predictions of models trained with one learning algorithm but not the other. We then present ModelDiff, a method that leverages the datamodels framework (Ilyas et al., 2022) to compare learning algorithms based on how they use their training data. We demonstrate ModelDiff through three case studies, comparing models trained with/without data augmentation, with/without pre-training, and with different SGD hyperparameters. Our code is available at https://github.com/MadryLab/modeldiff.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163675</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sanctuary for Who?</title>
<link>https://hdl.handle.net/1721.1/163581</link>
<description>Sanctuary for Who?
Salazar, Juan
Philadelphia, often recognized as the poorest major city in the United States, became a Sanctuary City in 2014. The designation committed the region to policies limiting cooperation with federal law enforcement in the persecution of undocumented communities. Policies have ranged from refusing to detain individuals without judicial warrants to prohibiting Immigration and Customs Enforcement (ICE) from accessing municipal databases or facilities for detention purposes. At the community level, the notion of the Sanctuary City sought to promote organizing against unlawful persecution of residents. Over the past eleven years, however, the framework of protection it promised has faltered under mounting federal pressure. The Sanctuary City's symbolic authority and limited scope have failed to shield residents from persecution or restrict ICE's intensifying operations within the area. In 2019, Juntos, the city's foremost immigrant advocacy organization, criticized Philadelphia's Sanctuary status as inadequate. Citing the ongoing persecution of its communities and the declining quality of life for all residents, the organization urged the city to abandon the term "Sanctuary." It petitioned the city instead to focus on meaningfully protecting all residents of Philadelphia, stating, "Let us instead work together to build the kind of city we all want to live in." Juntos's critique forms the basis of this thesis, which takes it as an invitation to reimagine the Sanctuary City as a shift from a policy framework toward a general ethic and design sensibility. This thesis proposes that Philadelphia's crux, like all cities, lies in its ability to sustain communities' pursuit of a dignified life. As a primary agent in the formation of cities, the architect must then make this struggle their own and deploy the tools of their discipline to protect life and inspire dignity.
By framing Philadelphia as a city shaped by deindustrialization, disinvestment, and policing, the thesis explores how architecture can respond to these forces by reviving the city's industrial character and establishing new boundaries able to safeguard community rights. Integrating legal, spatial, and semantic insights from federal authorities' rules of engagement will provide novel typologies and programs for the city that address its systemic inequities while fostering environments where life and dignity can flourish. By inscribing meaningful boundaries and re-equipping the city to make for itself, the thesis suggests that architecture can become a tool for collective protection and urban regeneration.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163581</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural analysis at scale: Computational modeling of embodied carbon in complex floor layouts</title>
<link>https://hdl.handle.net/1721.1/163580</link>
<description>Structural analysis at scale: Computational modeling of embodied carbon in complex floor layouts
Hirt, Natasha K.
To meet the needs of growing populations, rates of new construction are increasing at a record pace worldwide. The built environment, already one of the single largest contributors to global CO₂e emissions, will become a significant environmental challenge in the coming decades. To mitigate the anticipated environmental impact of future construction, we need to rethink how we build.&#13;
&#13;
One strategy, which is the subject of this work, is improving the material efficiency of flexural systems like floors. Floors are among the most materially wasteful structural components in buildings, and while decades of research have explored optimal floor system design, the complexity of proposed solutions has limited their practical implementation. Furthermore, the industrial tools available to structural designers do not lend themselves to flexible experimentation or large-scale analysis. As a result, most flexural systems today rely on approximations and rules of thumb rather than mathematically optimal designs, data-driven decision making, or iterative design processes.&#13;
&#13;
This thesis bridges the gap between practical engineering, material efficiency, and design freedom. It presents novel, code-compliant tools for the computational analysis and optimization of flat slabs supported by a network, or grillage, of beams, using a model system of reinforced concrete supported by steel W-sections. The method is used to perform a large-scale analysis of 24,192 unique combinations of beam topologies and assembly design decisions. The results of this analysis find improvements in structural embodied carbon of up to 53.4% over the business-as-usual design case, and also yield generalizable takeaways about the key factors influencing material efficiency in floor slabs. &#13;
&#13;
One of the advantages of the method is its flexibility in taking on a range of complex design challenges. These are presented as extensions to the method, and include designing with a constrained inventory for a series of real-world case studies, and automatically deriving novel structural geometries from dense ground structures.&#13;
&#13;
The method and results shown in this thesis expand the range of analysis tools that engineers have access to, enabling a wide range of creative designs and explicitly linking design decisions to environmental impact.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163580</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inscrutability: An Epistemological Experiment</title>
<link>https://hdl.handle.net/1721.1/163579</link>
<description>Inscrutability: An Epistemological Experiment
Huang, Brian Hudson
Through four different projects, this thesis explores the idea of dimensions of representation, a concept introduced by 20th century French philosopher Michel Foucault in his book The Order of Things. Foucault argues that the Classical episteme, which he defines as the discourse surrounding knowledge-making that lasted from the 17th century to the 19th century, was determined by the idea of dimensions of representation. This idea holds that during the Classical episteme, knowledge was formulated by representations of the external world, such as through systems of classification, ordering, and relations, rather than through resemblance. The first project, Holes in the Sieve (2023), addresses the problematics of classification through an infamous case in the history of paleoanthropology: the Piltdown Man. The second project, Contrapposto in Space (2024), addresses how representation has been instrumentalized in technoscience through space research. Finally, the last two projects, the Poem Box (2024) and Micropoetry (2025), posit a way forward at the limits of representation by engaging with semiotic theory. By engaging with language games, poetry opens up the possibility to deny the position of being knowable, allowing one to disappear into inscrutability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163579</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Koalisi Lahan–Gambut: Assembling Peat–Land Futures in Kalimantan</title>
<link>https://hdl.handle.net/1721.1/163578</link>
<description>Koalisi Lahan–Gambut: Assembling Peat–Land Futures in Kalimantan
El Haq, Haidar
Throughout Indonesia’s colonial and postcolonial histories, the peatlands of Kalimantan have been not only politically contested spaces but also sites of ontological struggle. From transmigrasi programs to Suharto’s Mega-Rice Project and most notably today’s carbon offset regimes, peat has been transformed into a paradoxical ecology: degraded yet investible, conserved yet profitable. These transformations enclose land, force communities to choose between extraction or restoration, criminalize fire, and abandon regenerative forms of cultivation. These are histories of ontological occupation institutionalized: the marginalization of both peat’s inhabitants and the soil itself as world-making agents, shaped by speculative regimes of governance, rooted in planetary imaginaries of climate salvation and fantasies of productivity. This thesis proposes Koalisi Lahan–Gambut (Peat–Land Coalition), a speculative parainstitution that explores how coalitional spatial practices might reclaim inhabitation in peat ecologies. Situated in a Ngaju village within the buffer zone of one of the world’s largest carbon offset territories—between deep peat and riverine edges, between restoration enclosures and plantation areas—the coalition works through the murkiness of peat, the heterogeneity of its inhabitants, and the crowded terrain of overlapping institutional claims. It foregrounds the frictions between gambut (peat) and lahan (land). Structured across three inquiries, the document presents a Living Glossary that assembles field terms and relational epistemologies drawn from Kalimantan’s peatlands; a genealogy of Governance, Carbon Fix, and Buffer Zone that traces the historical and institutional processes that rendered peatlands governable; and Landing in the Buffer Zone, which turns to the coalition’s situated experiments in becoming-with, inhabiting, and reclaiming the space between peat and land.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163578</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering TEV Protease Specificity: An Exploration of Machine Learning and High-Throughput Experimentation for Protein Design</title>
<link>https://hdl.handle.net/1721.1/163577</link>
<description>Engineering TEV Protease Specificity: An Exploration of Machine Learning and High-Throughput Experimentation for Protein Design
Sundar, Vikram
Engineering sequence-specific proteases would enable a wide variety of therapeutic applications in diseases ranging from cancer to Parkinson’s disease. However, many previous experimental and physics-based attempts at protease engineering have failed to engineer specificity in cleaving alternative substrates, rendering them useless. In this thesis, we aim to engineer TEV (tobacco etch virus) protease, a highly sequence-specific protease, to cleave alternative substrates. We incorporate novel high-throughput assays and powerful machine learning (ML) methods for highly effective protein engineering. The first portion of this thesis focuses on generating fitness landscapes from high-throughput experiments. Most machine learning models do not account for experimental noise, harming model performance and changing model rankings in benchmarking studies. Here we develop FLIGHTED, a Bayesian method of accounting for uncertainty by generating probabilistic fitness landscapes from noisy high-throughput experiments. We demonstrate how FLIGHTED can improve model performance on two categories of experiments: single-step selection assays, such as phage display, and a novel high-throughput assay called DHARMA that ties activity to base editing. FLIGHTED can be used to generate robust, well-calibrated fitness landscapes, and when combined with DHARMA, our methods enable us to generate fitness landscapes of millions of variants. We then evaluate how to model protein fitness given a fitness dataset of millions of variants. Accounting for noise via FLIGHTED significantly improves model performance, especially of high-performing models. Data size, not model scale, is the most important factor in improving model performance. Furthermore, the choice of top model architecture matters more than the protein language model embedding. The best way to generate sufficient data scale is via error-prone PCR libraries; models trained on these landscapes achieve high accuracy. 
Using these methods, we successfully engineer both activity on an alternative substrate and specificity when compared to the wild-type. The ML-designed variants outperform anything found in the training set, demonstrating the value of machine learning even with experimental libraries of millions of variants. However, our results are limited to relatively close substrates. How best to improve model performance on distant substrates remains an open question.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163577</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Experimental Study on the Effects of Three-Piece Oil Control Ring Design and Liner Finish on Lubricating Oil Consumption in a Hydrogen-Fueled Single-Cylinder Reciprocating Engine</title>
<link>https://hdl.handle.net/1721.1/163576</link>
<description>An Experimental Study on the Effects of Three-Piece Oil Control Ring Design and Liner Finish on Lubricating Oil Consumption in a Hydrogen-Fueled Single-Cylinder Reciprocating Engine
Tamburro, Alexandra
Reducing lubricating oil consumption (LOC) in reciprocating engines is an increasingly important objective in the pursuit of lower greenhouse gas emissions, longer maintenance intervals, and compliance with tightening environmental regulations. In 2022, the U.S. transportation sector alone was responsible for 29% of national greenhouse gas emissions, 87% of which originated from systems powered by reciprocating engines [1]. While significant progress has been made in fuel efficiency, oil consumption remains a key contributor to carbon emissions. This research investigates the impact of design parameters in three-piece oil control rings (TPOCRs) and liner surface finish on oil consumption behavior.&#13;
&#13;
Utilizing a hydrogen-fueled engine—where the only source of CO₂ emissions is from consumed lubricating oil—this study develops a high-fidelity, FTIR-based method for direct LOC measurement. A derivation of oil consumption based on air and fuel mass flow rates and measured CO₂ emissions is presented, alongside a sensitivity analysis which identified FTIR measurement uncertainty and ambient CO₂ variation as dominant error sources. All experiments were conducted at 2000 RPM under medium load (4 bar IMEP). The experimental results showed that under the tested condition, 1) increasing liner roughness increases the LOC and 2) changing the orientation of any rails with asymmetrical profile to favor up-scraping results in an elevation of LOC.  Analyses applying liner vaporization and TPOCR models showed that the changes in liner oil film thickness brought by the TPOCR changes have negligible effect on the LOC from the oil evaporation.  Increases in upper-rail up-scraping ability and the oil accumulation inside the TPOCR groove can both elevate the LOC although further investigation is needed to understand the oil transport paths leading to the LOC.&#13;
&#13;
This work provides a foundation for future optimization of TPOCR design by highlighting key ring-liner interactions and oil transport mechanisms. Further study of asymmetric geometries and surface characteristics will provide additional insights for reducing oil consumption in engine platforms.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163576</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Future Energy Infrastructure: Understanding trade-offs between Renewable Capacity, Storage and Transmission Networks for Low-Carbon Landscape</title>
<link>https://hdl.handle.net/1721.1/163575</link>
<description>Design of Future Energy Infrastructure: Understanding trade-offs between Renewable Capacity, Storage and Transmission Networks for Low-Carbon Landscape
Bhupathi, Hari Raghavendran
In 2021, the United States committed to achieving net-zero greenhouse gas emissions by 2050, requiring a fundamental transformation of its energy infrastructure. This thesis develops a nationwide optimization model to minimize capital expenditures and understand the trade-off between renewable capacity, storage, and transmission networks. The results show that the least-cost configuration, achieved when nuclear and battery capital costs fall by 50%, requires approximately $3.25 trillion in new investment - a 37% reduction relative to the baseline scenario. Comparative scenario analysis reveals a marked shift toward centralized storage when nuclear costs decline, which improves reliability and reduces contingency requirements - mirroring inventory pooling dynamics in supply chains. Concurrently, wind capacity additions fall sharply, with each 10% reduction in nuclear cost halving the predicted wind capacity addition. Transmission infrastructure evolves accordingly: 765 kV lines decline as nuclear becomes more decentralized, while 230 kV lines expand modestly to manage increased intermittency. By quantifying trade-offs across technologies and identifying system tipping points, this work offers a framework for policymakers and long-horizon investors.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163575</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Barometer-Based Tactile Sensing: Characterization, Processing, and Applications for Dynamic Manipulation</title>
<link>https://hdl.handle.net/1721.1/163574</link>
<description>Barometer-Based Tactile Sensing: Characterization, Processing, and Applications for Dynamic Manipulation
Shah, Sharmi
Reliable tactile feedback is essential for robotic systems to interact effectively with their environments, especially in dynamic manipulation tasks where detecting contact onset, direction, and force is critical for control and planning. This thesis advances the development of barometer-based tactile sensors for low-force interactions, building upon prior work from the Biomimetic Robotics Lab. Previous work demonstrated that neural networks could infer contact location and three-axis contact force from barometers embedded within an elastomer. However, these models did not account for the viscoelastic behavior of the elastomer, which degrades sensor repeatability and bandwidth. To address these limitations, this thesis introduces a recurrent neural network (RNN) architecture that captures viscoelastic transients in the sensor response. The proposed methods are evaluated on two sensor geometries: a spherical sensor and a slimmer ellipsoid variant. An automated data collection pipeline is developed to generate temporally-continuous, uniformly sampled datasets across the sensor surface. RNN models trained on this data show that temporal modeling improves force prediction accuracy across both designs. To improve angle prediction accuracy, a binning strategy is used to enforce a uniform prior over contact orientations. The resulting "Binned RNN" neural networks are small-scale and demonstrate high sensitivity, enabling responsive tactile feedback. The utility of these tactile sensors is demonstrated by integrating the sensors onto a dexterous two-finger gripper and performing light grasping and estimation of object reorientation using solely tactile measurements. This work shows that accounting for viscoelastic effects through informed sampling and temporal modeling enhances the practical performance of elastomer-based tactile sensors in robotic systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163574</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Can diffusion models capture extreme event statistics?</title>
<link>https://hdl.handle.net/1721.1/163573</link>
<description>Can diffusion models capture extreme event statistics?
Stamatelopoulos, Stamatios
For many important problems it is essential to be able to accurately quantify the statistics of extremes for specific quantities of interest, such as extreme atmospheric weather events or ocean-related quantities. While there are many classical approaches to perform such modeling tasks, interest has recently been increasing in the use of generative models trained on available data. Despite the sporadic success of such methods, it is not clear for what systems or datasets a system-agnostic generative AI tool is capable of generating previously ‘unseen’ extreme events in a manner that accurately extrapolates the tails for the observable of interest. Here, we propose an a priori criterion which, based on the geometry of the training dataset, can predict whether a generative AI tool will be able to extrapolate the tails, i.e., generate previously unseen extreme events. The idea is to quantify whether existing extreme events lie in the interior of the dataset or on its boundary. In the former case it is shown that generative AI tools can work in an ‘interpolation’ mode and generate new extreme events. On the other hand, if the topology of the dataset is such that extremes live on the boundary of the domain, then the generative AI algorithm needs to operate in an extrapolation mode, which does not lead to accurate results. We illustrate our findings on a specific class of Diffusion Models (DMs) called Denoising Diffusion Probabilistic Models (DDPMs) and test on three datasets: a simple on-hyperball dataset following a Weibull distribution for the radii of the data points, of dimensionality 2 • 10³; a dataset sampled from the so-called Majda-McLaughlin-Tabak Wave Model (MMT), of dimensionality 8.1 • 10³; and a dataset consisting of Lagrangian turbulence trajectories, of dimensionality 2 • 10³.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163573</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Face Me, I face you: Towards an Indigenous Economy of Glass in Southern Nigerian Dwellings</title>
<link>https://hdl.handle.net/1721.1/163572</link>
<description>Face Me, I face you: Towards an Indigenous Economy of Glass in Southern Nigerian Dwellings
Ajienka, Soala Lolia
This thesis proposes the weaving together of two lost traditions - the practice of primary glassmaking in southern Nigeria and the U-shaped bungalow typology of multi-family housing - as a means to address both the qualitative and quantitative housing deficits in Port Harcourt and to support the broader requisites of macroeconomic productivity in Nigeria. The thesis frames the argument that the materiality and application of glass can reconnect the inhabitation and construction of Face Me, I Face You (FMIFY) housing to Nigerian history, culture, and identity. By charting a blueprint for localized material production and engaging questions of affordability, cost structure, and financing, this work positions design as a technical solution and an act of cultural authorship. As an architect, builder, and member of the community, I advocate for a new practice in which the bond between local craftsmanship and housing development is re-established - through material choices, construction systems, economic benchmarking, and spatial design strategies. This body of work braids together three interconnected narratives: First, it traces the historical evolution of the U-shaped bungalow typology, revealing its roots as a colonial adaptation of the rural compound house and the economic conditions that have led to its physical obsolescence yet sustained market relevance, and examining how its cultural significance was gradually diluted through climate-insensitive design and the introduction of imported materials. Second, this body of work rediscovers Nigeria’s precolonial glassmaking traditions, with a focus on artisanal production methods that offer environmental efficiency, energy intelligence, and deep cultural resonance - qualities in stark contrast to the high-energy, standardized imported glass that dominates today’s housing.
Third, it integrates these two recoveries through built interventions: redesigning roof structures to support artisanal glass rondels, optimizing daylighting, ventilation, and thermal comfort, and reorganizing courtyards to revive their role as culturally vibrant, socially essential spaces. By leveraging indigenous glassmaking practices and small-batch production models, this thesis advocates for the creation of a circular economy, generating local employment, reducing embodied energy, and restoring cultural resilience - while delivering environmentally sensitive and economically viable housing solutions that demonstrate comparable return on costs for their owners. Foregrounding opacity as a design value, the project seeks to balance communal life with cultural and spatial notions of privacy, challenging the hegemony of imported transparency. Through the strategic curation of apertures, the careful modulation of light and shadow, and the integration of locally crafted glass rondels, the thesis re-envisions the Face Me I Face You typology. Ultimately, this work positions artisanal glass not only as a building material, but as a medium for recalibrating housing production in southern Nigeria toward systemic resilience and self-determination.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163572</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>DexWrist: A Robotic Wrist for Constrained and Dynamic Manipulation</title>
<link>https://hdl.handle.net/1721.1/163571</link>
<description>DexWrist: A Robotic Wrist for Constrained and Dynamic Manipulation
Ulloa, Gabriella E.
DexWrist is a compliant robotic wrist designed to advance robotic manipulation in highly-constrained environments, enable dynamic tasks, and speed up data collection. DexWrist is designed to approach the functional capabilities of the human wrist, and it achieves mechanical compliance and a greater workspace compared to existing robotic wrist designs. The DexWrist can supercharge policy learning by (i) enabling faster teleoperation and therefore making data collection more scalable; (ii) completing tasks in fewer steps, which reduces trajectory lengths and therefore can ease policy learning; (iii) being torque transparent with easily simulatable kinematics for simulated data collection; and, most importantly, (iv) expanding the workspace of manipulation for approaching highly cluttered scenes and tasks. More details about the wrist can be found at: https://sites.google.com/view/dexwrist/home.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163571</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guiding Labor: Sensable Instructions through Digital Jigs</title>
<link>https://hdl.handle.net/1721.1/163570</link>
<description>Guiding Labor: Sensable Instructions through Digital Jigs
Griffin, Danny
Contemporary architects find themselves at a juncture, navigating the transition from traditional modes of instruction to an asymmetrical integration of digital technologies. Drawings remain central to architectural practice, yet a widening gap persists between tools for making drawings and tools for interpreting them. Since Alberti’s division between intellectual and productive labor, architectural instructions have been generated in remote offices and executed on distant construction sites. Digital tools have expanded the information density of drawings, yet the process of interpretation remains predominantly analog. Graphical conventions, though precise, are abstract, and so paper instructions alone lack spatial meaning. Builders ultimately rely on the aid of analog locating techniques to translate these abstractions into actions. Tools as simple as strings and squares have long been present on construction sites, enabling this translation. Over time, the shape and function of such devices have evolved in response to different pressures of location, from the Gothic template which left room for the builder to improvise, to the industrial jig that constrained movement to ensure replicability. The limitations of analog locating became clear when the plumb bob, long trusted to mark which direction was vertical, proved inadequate for navigating trajectories of flying objects. The solution was to embed physical devices with memory, marking a transition from tools which measure where they are to those that know where they are going. This shift from stateless to stateful devices gradually entered construction sites, and though we might distrust the devices that make possible the steering of missiles, this paradigm shift offers a productive challenge to the field of architecture. If simplifying complex construction is worthwhile, then communication pathways which more faithfully transfer information from digital model to physical destination must be explored. 
Central to this transformation are the tools which anchor instructions on site: interfaces already mediating between architect and builder, which must now evolve to interpret digital signals from afar. Digital jigs will be the conduits of paperless instruction on physical sites, enabling what this thesis terms sensable instructions: instructions receivable by both machines and humans.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163570</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Interaction: 3D Modeling in Wearable VR Using a Gesture and Speech Interface</title>
<link>https://hdl.handle.net/1721.1/163569</link>
<description>Natural Interaction: 3D Modeling in Wearable VR Using a Gesture and Speech Interface
Bei, Yining
Designers often rely on keyboard and mouse for 3D modeling, a method that can feel unintuitive or restrictive—especially in collaborative or spatially immersive settings. This thesis explores how multimodal interaction, specifically the combination of hand gestures and voice commands, can support more natural, efficient, and accessible 3D modeling in virtual reality (VR). Built on a custom Unity-based system integrating Meta Quest hand tracking and Wit.ai voice recognition, the study investigates how these two input modes—gesture and speech—can be used together to manipulate and modify 3D geometry in real time. The research proceeds in three phases: (1) a formative study analyzing how users intuitively deploy gestures, revealing common preferences, task breakdown strategies, and limitations in gesture inputs; (2) system design and implementation of both gesture-only and gesture + speech interfaces for navigation and object manipulation (e.g., translation, scaling, duplication); and (3) a comparative user study evaluating gesture-only, gesture + speech, and keyboard + mouse workflows in terms of learning curve, task efficiency, and user satisfaction. Results show that gesture + speech enables smoother transitions across modeling subtasks and allows users to offload certain parameters (e.g., numeric values, distances) to voice while using gestures for spatial control. Participants reported higher engagement and lower cognitive load compared to keyboard-based workflows, especially in tasks involving spatial scale and collaboration. This thesis demonstrates the feasibility and design potential of multimodal interaction for immersive modeling workflows and offers insights for future XR design tools that seek to blend precision with embodied interaction.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163569</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning-Guided Optimization for Intelligent Mobility Systems</title>
<link>https://hdl.handle.net/1721.1/163568</link>
<description>Learning-Guided Optimization for Intelligent Mobility Systems
Li, Sirui
Efficient and reliable mobility systems are essential to modern-day society, with broad impacts ranging from day-to-day commuting, public transportation, and emergency response to last-mile package delivery and freight logistics. Autonomous vehicles have the potential to improve mobility efficiency and convenience but also raise questions about the reliability and feasibility of deployment. The first contribution of this thesis is a set of novel, principled control-theoretical analyses that provide strong stability and reliability guarantees for autonomous vehicles and human-compatible driving, which further cover emergent traffic behaviors in mixed-autonomy systems. While these theoretical guarantees offer valuable insights, mobility systems are inherently complex, and their overall performance often relies on solving difficult optimization problems, many of which are combinatorial, thus presenting significant scalability challenges. Overcoming these challenges requires innovative approaches that extend beyond traditional control techniques. This thesis further contributes a set of machine learning-guided optimization algorithms that significantly enhance the efficiency and scalability of solving combinatorial optimization problems. These algorithms have proven effective across a wide range of mobility-related applications. Compared to state-of-the-art solvers, they achieve 10× to 100× speed-up in large-scale vehicle routing problems, 35% to 70% solve-time improvement in various mixed-integer linear programming problems, and up to 54% acceleration in long-horizon scheduling problems. These advancements open new possibilities for efficient decision-making in large-scale transportation systems, enabling smarter, faster, and more adaptive mobility solutions.
Combining learning, optimization, and control, this thesis demonstrates the potential of learning-guided optimization and principled control-theoretical analysis to address the increasing complexity of modern mobility systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163568</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Top-down and bottom-up interactions for cortical bursting</title>
<link>https://hdl.handle.net/1721.1/163567</link>
<description>Top-down and bottom-up interactions for cortical bursting
Tang, Vincent D.
High-frequency burst firing occurs throughout the mammalian cortex in vivo, yet both the underlying mechanisms and functional roles of bursts are unclear. Burst firing in brain slices is strongly modulated by the activity of apical dendrites, which branch extensively in layer 1 (L1) and receive long-range inputs from higher-order cortical and thalamic areas. These properties suggest a powerful subcellular substrate by which single pyramidal neurons could multiplex bottom-up and top-down information via L1-independent tonic spikes and L1-dependent bursts, respectively, and have provided a basis for emerging theoretical models of cortical computation and learning. However, our understanding of burst firing and subcellular processing remains critically limited by a lack of evidence in awake animals. It is unclear whether burst firing a) is preferentially recruited by bottom-up versus top-down inputs, and b) requires apical dendritic engagement. To answer these questions, we performed high-density extracellular recordings in primary visual cortex of awake mice while presenting a battery of Gabor (bottom-up) and inverse (top-down) visual stimuli. We report widespread high-frequency bursts in L2/3 and L5 pyramidal neurons. Contrary to expectation, bursts exhibited extremely short response latencies, and were most strongly recruited by Gabor stimuli. We further tested the causal contribution(s) of apical dendrites to burst firing and top-down visual tuning via two optogenetic manipulations: direct L5 apical tuft inhibition and NDNF interneuron activation. Strikingly, L1 inhibition only modestly reduced the burst fraction, and did not differentially affect Gabor vs inverse responses. Taken together, these results challenge prevailing theories of apical dendritic involvement in burst spike generation and feedback visual tuning, and provide new biological constraints for future theoretical and experimental work.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163567</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Limits of Coupled Condensation and Desorption in Sorption-Based Atmospheric Water Harvesting (SAWH) Devices</title>
<link>https://hdl.handle.net/1721.1/163566</link>
<description>Understanding the Limits of Coupled Condensation and Desorption in Sorption-Based Atmospheric Water Harvesting (SAWH) Devices
Stamler, Natasha Lia
Access to clean water is a serious challenge around the world, with almost 2/3 of the global population experiencing water scarcity at some point during the year, especially in dry regions. One solution to this problem is sorbent-based atmospheric water harvesting (SAWH) due to its ability to produce drinking water in a range of environments, including at low humidity. SAWH device operation is composed of adsorption and desorption phases. During adsorption, moist air flows into the device and is adsorbed onto the sorbent bed. This is followed by the desorption phase during which the sorbent is heated to desorb the water as vapor, which is then transported to a colder condenser surface on which it is condensed as liquid water. Finally, the condensed water can be collected outside the device. However, current state-of-the-art SAWH devices are inefficient, with less than 70% of their adsorbed water being collected. This means the adsorbed water is either not condensed or condensed but not collected. This work discusses the impact of the coupling between desorption and condensation on the efficiency of SAWH devices. In general, SAWH systems can suffer from three scenarios of inefficient desorption-condensation: flux-limited, when the desorption rate in the device is insufficient to fully utilize the condenser’s condensation capacity; transport-limited, when the time scale of the vapor transport from the sorbent bed to the condenser is slow compared to the desorption operation time; and condenser-limited, when the condenser has a poor thermal design compared to the vapor flux. We developed a system-level model of a SAWH device to inform design strategies to mitigate these three bottlenecks and optimize device performance. Additionally, we quantified hydrocarbons, common airborne contaminants, as a mechanism for slowing water collection. 
Experimental findings are used to develop a model for the impact of airborne hydrocarbon adsorption on surface wettability and water retention for six metals commonly used as condenser materials. The findings from these models can inform design recommendations for SAWH devices as well as various other industrial applications in which water condenses on metal surfaces such as refrigeration and power generation. Future work will focus on continued experimental validation of the models.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163566</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of Vimentin Intermediate Filaments on 3D Multicellular Collective Behavior</title>
<link>https://hdl.handle.net/1721.1/163565</link>
<description>Impact of Vimentin Intermediate Filaments on 3D Multicellular Collective Behavior
Rodriguez, Camille Dyani
Vimentin, a type III intermediate filament, is an understudied component of the cytoskeletal system. However, recent studies show that its structural and mechanical properties aid in cell survival and migration. It forms a hyperelastic network and works synergistically with actin and microtubules to protect against large deformations. Despite the critical role of vimentin intermediate filaments in many biological processes, there are limited studies on their role in collective migration in 3D in vitro. To elucidate vimentin’s role in a collective cell cluster, single MCF-7 cells are embedded in a Matrigel-Alginate gel, where they grow into multicellular systems. The MCF-7 cells utilized are vimentin null and chemically inducible to form vimentin networks that interact with the other components of the cytoskeleton. These MCF-7 cells allow for controlled expression of mature vimentin intermediate filaments (VIFs), which then form networks. We study these multicellular clusters over the course of 14 days. We demonstrate that there are key differences in morphology and mechanics in the presence of vimentin. Our results suggest that VIFs create more irregular cell clusters with more visible dynamic interplay with the environment. Uninduced (no VIFs) clusters were overall less dynamic and exhibited spherical morphology and minimal protrusions. Clusters with mature VIFs tended to form more elongated multicellular clusters with an increased number of projections into the surrounding gel. In these induced (with VIFs) clusters, the projections are shown to be constantly protruding and retracting while the nuclei continually reorganize. Our results show that these projections are accompanied by increased protrusive and contractile gel displacements. These results indicate that vimentin networks generate a dynamic and functional morphology and mechanically perturb their environment in the early stages of cell cluster collective behavior.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163565</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning for Dynamic Nonprehensile Object Transport</title>
<link>https://hdl.handle.net/1721.1/163564</link>
<description>Planning for Dynamic Nonprehensile Object Transport
Wang, Eric K.
Generalized planning methods for dynamic manipulation struggle to efficiently handle kinodynamic constraints. Gradient-based methods suffer from initialization sensitivity, convergence to local optima, and a lack of feasibility guarantees, while sampling-based methods can require large computation times when boundary conditions are challenging. Iterative Time Optimal Path Parameterization, or iTOPP, guarantees a feasible local minimum for a dynamic grasping problem by iteratively decreasing transit time for a trajectory initially generated to satisfy kinodynamic contact constraints. We demonstrate solutions that can handle initial or final goal states that are quasistatically infeasible, for which purely quasistatic motions cannot generate a warm-start trajectory. We also design an indirect adaptive controller that can track a desired dynamic grasping trajectory under unknown object mass and location parameters.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163564</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamental Behavior of Nanoporous Networks in the Out-of-Autoclave Manufacturing of Carbon Fiber Reinforced Polymer Composites</title>
<link>https://hdl.handle.net/1721.1/163563</link>
<description>Fundamental Behavior of Nanoporous Networks in the Out-of-Autoclave Manufacturing of Carbon Fiber Reinforced Polymer Composites
Webb, Alisa Nicole
Throughout the aerospace industry, carbon fiber reinforced polymer (CFRP) laminated composites are used extensively in spacecraft and aircraft vehicles due to their high specific strength and stiffness and other properties. Processing these advanced structural CFRP composites, especially in prepreg form, is often completed via autoclaves, where elevated temperatures and pressures of typically 180 °C (350 °F) and 0.7 MPa (7 bar), respectively, are applied to cure the polymer matrix and compress the constituent laminae together. However, autoclaves are energy intensive, expensive, and impose geometrical constraints on components due to thermal gradients within the chamber. Thus, there exists a need to find alternative manufacturing techniques. Throughout this thesis, an alternative method to autoclave processing is presented using vacuum-bag-only (VBO) techniques with nanoporous networks (NPNs) in the interlaminar regions of autoclave-required epoxy prepreg CFRP composites. Nanoporous materials are defined as materials containing pores in the mid-nanometer to low-micrometer range. Once placed in the interlaminar region of the laminate, voids are reduced by the induced capillary pressures of the NPNs, and trapped gas evacuates through the NPN. By utilizing capillary flow porometry, capillary pressure and through-thickness permeability are quantified for various NPNs, along with other porous materials. Capillary pressure and permeability exhibit an inversely proportional relationship for all tested materials, with CNT-based and polymer aerogel NPNs providing capillary pressures higher than an autoclave pressure of 0.7 MPa. Accordingly, an Ashby-type plot is presented as an aid for NPN selection for composites manufacturing.
Previous studies of unidirectional glass fiber reinforced polymer (GFRP) composites and unidirectional CFRP composites show success with NPN-enabled VBO manufacturing using aligned carbon nanotubes (A-CNTs) and electrospun polymer nanofiber (EPN) mats. However, success with woven prepreg had not been consistently achieved before this thesis. Autoclave woven epoxy CFRP laminates of IM7/8552 are manufactured using EPN and polymer aerogel NPNs with a VBO procedure. Once manufactured, these laminates were characterized for quality through void content analysis: a void content of 0.11 vol% was achieved, well within the 1 vol% requirement for aerospace-grade composite components. To aid in the understanding of NPNs, in situ experiments utilizing microcomputed tomography are developed to investigate the (presumed Newtonian) flow of resin throughout the NPN as a function of temperature, which varies throughout a typical manufacturer recommended cure cycle (MRCC), along with the void evolution throughout the cure cycle. Based on this new in situ understanding, a manufacturing process modification is devised to produce void-free woven laminates at the 152.4 mm laminate scale. Through manufacturing, material characterization, and designed in situ experiments, this thesis demonstrates the use of NPNs for VBO manufacturing of low-void-content aerospace-grade CFRP composites to replace autoclaves for energy and cost savings.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163563</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mediators: Participatory Collective Intelligence for Multi-Stakeholder Urban Decision-Making</title>
<link>https://hdl.handle.net/1721.1/163562</link>
<description>Mediators: Participatory Collective Intelligence for Multi-Stakeholder Urban Decision-Making
Gao, Jin
Cities are dynamic and evolving organisms shaped through the check-and-balance of interest exchange. As cities gain complexity and more stakeholders become involved in decision-making, reaching consensus becomes the core challenge and the essence of the urbanism process. This thesis introduces a computational framework for AI-augmented collective decision-making in urban settings. Based on real-world case studies, the core decision-making process is abstracted as a multiplayer board game modeling the check-and-balance dynamics among stakeholders with differing values. Players are encouraged to balance short-term interests and long-term resilience, and evaluate the risks and benefits of collaboration. The system is implemented as a physical interactive play-table with digital interfaces, enabling two use cases: simulating potential outcomes via AI self-play, and human–agent co-play via human-in-the-loop interactions. Technically, the framework integrates multi-agent reinforcement learning (MARL) for agent strategy training, multi-agent large language model (LLM) discussions to enable natural language negotiation, and retrieval-augmented generation (RAG) to ground decisions in contextual knowledge. Together, these components form a full-stack pipeline for simulating collective decision-making enriched by human participation. This research offers a novel participatory tool for planners, policymakers, architects, and the public to examine how differing values shape development trajectories. It also demonstrates an integrated approach to collective intelligence, combining numerical optimization, language-based reasoning, and human participation, to explore how AI–AI and AI–human collaboration can emerge within complex multi-stakeholder environments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163562</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creating space for HVAC systems: A new, intuition-building approach to HVAC system integration in architectural education and practice</title>
<link>https://hdl.handle.net/1721.1/163561</link>
<description>Creating space for HVAC systems: A new, intuition-building approach to HVAC system integration in architectural education and practice
Irani, Ali
Heating, Ventilation, and Air Conditioning (HVAC) systems are vital to ensuring a healthy indoor environment in buildings. They are essential to the global shift toward a decarbonized, all-electric future. While integrated design practice has promised cost, energy, and space savings due to earlier and more frequent collaboration between design disciplines, remaining missed opportunities in the HVAC system design and coordination process often lead to spatial conflicts, performance tradeoffs, and uncomfortable spaces. This dissertation aims to understand current coordination practices to identify the root causes of existing problems, timeline issues, and knowledge gaps. Then, it proposes a series of enhancements to address these shortcomings, focusing on National Architectural Accrediting Board (NAAB) accredited architectural education programs that train the next generation of practicing architects. The proposed research hypotheses are validated in a three-part research approach: (1) releasing architecture industry surveys and conducting interviews, (2) designing and testing an early-stage design tool, and (3) developing, implementing, and evaluating a comprehensive HVAC curriculum for architecture students. The dissertation demonstrates that with the right tools and educational resources, architecture students can make informed, intuition-based HVAC system selections and integrate them into their building design, with students who studied the comprehensive curriculum demonstrating a 13% improvement in understanding and application of HVAC concepts compared to a control group of students. This work helps bridge the knowledge gap regarding HVAC systems, empowering designers to coordinate more effectively and prioritizing the role of HVAC systems in building performance simulation education.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163561</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Scar To Scaffold: The Afterlife of the Oil Pipeline for a Decarbonizing World</title>
<link>https://hdl.handle.net/1721.1/163560</link>
<description>From Scar To Scaffold: The Afterlife of the Oil Pipeline for a Decarbonizing World
Apostolopoulou, Katerina
With over 86,000 kilometers of crude oil pipelines—and more than 2.13 million kilometers of total oil and gas pipelines in the United States as of 2024—many segments are already corroded and aging, deeply embedded within urban and ecological systems that are increasingly endangered. As the global energy transition accelerates, this thesis investigates the future of these infrastructures, reconsidering the vast network of decommissioned and declining legacy pipelines not as obsolete relics, but as latent spatial assets for ecological repair, climate resilience, and socio-environmental justice. Moving beyond narratives of extraction and decay, the project repositions pipelines as linear territories of opportunity—capable of being retrofitted into new civic, ecological, and infrastructural frameworks. Central to the project is the transformation of the pipeline’s linear, extractive logic into a circular and connective one: a loop that is both finite and infinite, territorial and experiential. Focusing on a strategically selected loop of crude oil pipelines spanning 14 states, the thesis constructs a cartographic and architectural framework to reimagine these lines as sites of ecological repair, social infrastructure, and alternative energy distribution—where design, much like a biological scaffold, acts as a catalyst for regeneration along landscapes shaped by extraction. Through spatial analysis, typological classification, and mapping, five territorial conditions are defined along the pipeline loop, each offering distinct opportunities for intervention. These are tested through speculative design prototypes that transform the pipeline through operations of repurpose, renewable energy distribution, or ecological remediation. The interventions reframe invasive infrastructures into public and environmental assets—generating new spaces for inhabitation, production, and collective memory. 
Ultimately, the thesis proposes a post-carbon design paradigm rooted in ecological reciprocity, collective agency, and infrastructural care—revealing hidden energy landscapes and inscribing them with new values: resilience, equity, and repair.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163560</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control and Aerodynamic Design of a Solar Road Vehicle with Articulated Surfaces</title>
<link>https://hdl.handle.net/1721.1/163559</link>
<description>Control and Aerodynamic Design of a Solar Road Vehicle with Articulated Surfaces
Salmon, Jason
The automobile industry is critical to modern society. Simultaneously, the constant release of toxic emissions such as greenhouse gases into the atmosphere is detrimental to health and the environment. Vehicles which exploit cleaner energy sources would be preferable to reduce the horrific scale of human-initiated damage such as climate change. However, solar road vehicles—though designed and fabricated by some—have not reached a sufficient level to be production-worthy. The low efficiency of solar cells and the high energy demands of the average land vehicle are irreconcilable for most manufacturers using industry methods and design precedent. Therefore, this work centres around the design and control of a solar road vehicle which fundamentally breaks from the mould of the typical road vehicle design—a vehicle which employs extensive articulated surfaces (dubbed "solar wings") which can be angled to directly face the sun, thereby maximising solar irradiation. A solar tracker using Bayesian inference is presented, achieving promising results in both convergence and accuracy. Additionally, a systematic method for optimising a solar road vehicle with solar wings is developed and documented.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163559</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of Multi-Object Working Memory and Motion Prediction in the Primate Brain</title>
<link>https://hdl.handle.net/1721.1/163558</link>
<description>Mechanisms of Multi-Object Working Memory and Motion Prediction in the Primate Brain
Watters, Nick
Sample-efficient learning and flexible generalization are hallmarks of intelligent behavior. Both sample-efficient learning and flexible generalization rely on re-using a mental model of the world in new contexts. For many decades, researchers in cognitive science, neuroscience, and machine learning have studied competing theories about the structure of our mental model of the world. One set of theories concerns the structure of multi-object representations in the brain. Some studies claim the brain represents multiple objects by allocating them to disjoint “slots” in working memory, others claim that the brain flexibly distributes a common pool of resources across objects, and yet others claim the brain represents multiple objects by rapidly switching between them through time. Another set of theories concerns the nature of predicting object motion. Some claim that the mind has an internal model of physics in the world that it uses to simulate the motion of objects through time, whereas others claim the mind relies on priors and heuristics to predict object motion without explicit simulation. Both of these sets of competing theories are long-standing and unresolved. In this work, we tackle these two open questions using primate neurophysiology and computational modeling. We trained monkeys to perform multi-object memory and motion prediction tasks, recorded large-scale single-unit activity from frontal cortex brain areas, and rigorously compared different hypotheses for the neural mechanisms of multi-object working memory and motion prediction. In the case of multi-object working memory, we found that the neural activity we recorded is more consistent with a model that flexibly distributes attentional resources across objects than with models that use object slots or temporal switching representations. In the case of motion prediction, we found that the neural activity is not consistent with the monkeys simulating an occluded moving object in real-time. 
Instead, the monkeys’ neural activity is driven largely by an anticipation of the position of the object at a future point in time. Both of these findings call into question long-standing cognitive theories and imply that the brain’s model of the world incorporates attentional mechanisms, priors, and heuristics. Lastly, we introduce a neural data preprocessing method for stabilizing electrophysiology recordings. This method improved spike-sorting results and helped us recover more neurons from our data, and we hope it will help others make the most of their electrophysiology data as well.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163558</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Commercialization Strategy of a Gantry-Based Automation Platform for High-Throughput Raman Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/163557</link>
<description>Design and Commercialization Strategy of a Gantry-Based Automation Platform for High-Throughput Raman Spectroscopy
Romero, Catalina
Raman spectroscopy is a powerful optical technique that enables rapid, label-free molecular analysis, offering significant potential across pharmaceutical development, microbiome research, and food diagnostics. However, the utility of Raman spectroscopy in high-throughput applications has been limited by the lack of cost-effective, modular automation platforms capable of handling large volumes of samples with precision and repeatability. Conventional Raman workflows are constrained by manual sample handling, slow throughput, and high user variability, limiting their applicability in high-volume testing environments. To address these challenges, this thesis presents the development and initial validation of a custom two-axis (XY) gantry and robotic well plate stacker automation platform designed to streamline the sample handling workflow in Raman spectroscopy systems, facilitating high-throughput, precise, and reproducible positioning of microplate samples under a Raman microscope. This thesis also provides a commercialization framework for the system as a standalone automation product, targeting pharmaceutical high-throughput screening, microbiome analysis, and food safety testing. The platform serves unmet needs in these industries, where labor-intensive and inconsistent sample positioning limits scalability. The commercialization analysis includes an evaluation of market sizing, competitive benchmarking, pricing models, and go-to-market strategies. The modular platform has the potential to enable broader adoption of Raman-based analysis tools by reducing labor intensity and improving repeatability in sample positioning workflows. This work lays the foundation for the future integration of optical feedback and automated analysis, with the goal of transforming how Raman-based diagnostics are conducted at scale.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163557</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dishing It Out: Reimagining Multicultural College Dining Through Student-Centered Design</title>
<link>https://hdl.handle.net/1721.1/163556</link>
<description>Dishing It Out: Reimagining Multicultural College Dining Through Student-Centered Design
Dong, Annie
Dining halls are central spaces in colleges, fostering not only nourishment but also cultural connection and community. However, when dining centers fall short in catering to the needs of their multicultural student body, students are often left feeling isolated and even further from home. Using MIT as a case study, this thesis employs user research and digital storytelling to explore how collecting student perspectives can inform college dining centers on better supporting the diverse cultural backgrounds and dietary needs of their students. The research and findings highlight the critical gaps and strengths in cultural representation within MIT’s dining halls. Through surveys and user research, this thesis gathers student perspectives on food authenticity, comfort, and identity, which inform the design of an interactive website prototype exploring student culinary backgrounds and preferences. This project serves as both a resource for dining services and a digital cookbook curated by the student body. By centering student voices through a culinary lens, this project aims to reimagine dining spaces as inclusive, representative, and comforting shared spaces within college campuses.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163556</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligence Through Interaction: Leveraging Physical Interactions In Simple Robots To Produce Complex Behaviors</title>
<link>https://hdl.handle.net/1721.1/163555</link>
<description>Intelligence Through Interaction: Leveraging Physical Interactions In Simple Robots To Produce Complex Behaviors
Spino III, Pascal
This thesis investigates how intelligent robot behavior can emerge from physical interactions rather than sensing, computation, and actuation in the traditional sense. Two robotic systems are presented to explore this concept in different domains. The first is a swarm of simple rolling robots whose collective morphology is shaped by distributed control laws and magnetic interactions, enabling decentralized construction-like behaviors such as bridge formation. The second is a soft underwater robot inspired by anguilliform swimming, which achieves efficient locomotion through a single actuator that leverages fluid–structure interactions in a compliant silicone tail. Useful behavior arises in both systems from the physical design and the dynamics of environmental interaction, rather than from algorithmic or computational complexity. These results demonstrate that physical intelligence can serve as a powerful design principle for building adaptive, robust, and minimal robotic systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163555</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of Revenue Management to Satellite Communications</title>
<link>https://hdl.handle.net/1721.1/163554</link>
<description>Application of Revenue Management to Satellite Communications
Eiskowitz, Skylar
As the demand for satellite Internet continues to grow, satellite communication (SatCom) operators are faced with the challenge of effectively managing their capacity sales. While Revenue Management (RM) techniques have been widely used in other industries such as airlines, hotels, and car rental services, the application of these methods in the context of SatCom is still scarce. This thesis aims to bridge this gap by developing RM concepts, techniques, and optimization algorithms specifically tailored to the unique operational characteristics of SatCom capacity management and sales. The proposed SatCom RM method gives operators quantitative recommendations on how much capacity to sell for each product, over time and across regions, to maximize revenues.&#13;
&#13;
 Though SatCom has characteristics that favor the use of RM concepts (perishable inventory, fixed capacity with a low variable cost, the possibility of segmenting demand), there are unique structural characteristics that complicate the development of SatCom RM models. The primary challenge is that different products consume varying amounts of capacity, with larger-terminal products utilizing less satellite power than smaller-terminal products. Moreover, the selling practices in SatCom are complex because products may be sold in one period and consumed across multiple periods in which additional sales are made. This requires treating both the selling and consumption periods on a rolling basis. Lastly, the SatCom RM problem poses a multidimensional network problem, as products can consume bundles of resources in both space and time.&#13;
&#13;
We extend two commonly used airline RM algorithms, the Expected Marginal Seat Revenue (EMSRb) heuristic and Displacement Adjusted Virtual Nesting (DAVN), to the SatCom problem to create booking limits. The booking limits recommend a threshold amount of capacity an operator should sell of each product. The contribution of this thesis is the modification of established airline RM algorithms to handle products with variable capacity uptake. Further, these algorithms typically account for displacement costs of products, but only in one dimension of space or time (e.g., selling an airline flight that uses multiple spatial legs may displace capacity away from flights that only use one leg). Our modifications allow for the consideration of displacement costs in both dimensions of space and time.&#13;
 &#13;
In order to evaluate the effectiveness of our inventory control approach, we conduct simulations of various demand scenarios and compare the revenue gains to a baseline scenario with no controls, as well as a simpler method that does not consider product duration. In a large-scale simulation spanning three years and encompassing thousands of product requests, we observe revenue gains ranging from 15%-30% depending on the demand scenario. Then, we extend the model to multiple zones and achieve 2%-10% revenue improvement using our Multi-Zone DAVN method compared to the DAVN method applied to each zone separately.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163554</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hidden Monuments</title>
<link>https://hdl.handle.net/1721.1/163553</link>
<description>Hidden Monuments
Lee, Sesil
Jeju Island’s burial culture is embedded in the island’s distinct landscape, where sandam burial mounds are not isolated monuments but quietly coexist with fields, ranches, and forests. These sites are living records of intangible heritage—ancestral beliefs, Beolcho rituals, and vernacular stone-stacking practices—manifested not through formalized memory, but through their modest yet persistent presence in the landscape. Today, however, these spaces are under threat: policies favoring cremation, rapid urbanization, and shifting land values render them increasingly invisible or obsolete. In the past few decades, two-thirds of sandam have been displaced, and with fewer than six out of over 100,000 burial sites designated as cultural heritage, traditional models of conservation are inadequate—unable to engage with the dispersed, landscape-bound nature of these burial grounds. This project reimagines Jeju’s burial mounds not as relics to be preserved, but as spatial anchors for cultural and communal expressions. Through a series of small-scale architectural interventions—gates, stages, passages, and shelters—deployed along paths tracing sandam clusters, the work explores how memory can be practiced rather than displayed. By offering ways to engage with the buried, the forgotten, and the living simultaneously, the project expands the idea of heritage: not as a static record, but as a participatory and evolving relationship between people, land, and memory.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163553</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-invasive tuning of experience-dependent plasticity in the primary visual cortex</title>
<link>https://hdl.handle.net/1721.1/163552</link>
<description>Non-invasive tuning of experience-dependent plasticity in the primary visual cortex
Reilly-Andújar, Francis
The cerebral cortex exhibits a remarkable capacity for experience-dependent plasticity, a feature that is predominantly confined to critical periods (CPs) during early postnatal development. In the mouse primary visual cortex (V1), ocular dominance plasticity (ODP) has served as a premier model for investigating the cellular and molecular mechanisms that underlie the formation and stabilization of cortical circuits. During the CP, short-term monocular deprivation (MD) induces both functional and anatomical changes in binocular V1, characterized by a weakening of deprived-eye responsiveness via mechanisms of synaptic long-term depression. As the critical period closes, increased inhibitory drive and the emergence of perineuronal nets (PNNs) stabilize neural circuits and restrict further experience-dependent plasticity. In Chapter 1, I review the key literature on ODP and provide a survey of interventions that have been shown to enhance ODP in adulthood. In Chapter 2, I present our findings that repeated anesthetic ketamine treatment can reinstate ‘juvenile-like’ plasticity in the adult mouse V1. Importantly, I demonstrate that this effect relies on the microglia-mediated depletion of PNNs, and that interfering with microglial purinergic P2Y12 receptor activation blocks the ketamine-induced enhancement of ODP. Building on these insights, Chapter 3 investigates the use of non-invasive light-flicker stimulation at different temporal frequencies as a means to unlock different forms of ODP in the adult mouse V1. Our results reveal that 60 Hz light-flicker stimulation reduces PNN levels and promotes a depression of deprived-eye responses following short-term MD, whereas 40 Hz stimulation – without altering PNN levels – enhances an adult form of ODP characterized by the strengthening of non-deprived eye responses following short-term MD. 
Furthermore, we show that in mice subjected to long-term MD initiated early in life, 40 Hz light-flicker treatment promotes recovery of visual function, as evidenced through physiological and behavioral assays. Finally, Chapter 4 outlines a series of future experiments designed to further elucidate the mechanisms by which light-flicker stimulation promotes enhanced ODP in adult V1. Together, the findings presented in this thesis introduce novel, minimally invasive (ketamine) and non-invasive (light-flicker) interventions that show promise as therapeutic strategies for ameliorating deficits arising from early life sensory deprivation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163552</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Mechanical and Electrical Interfaces for Rapid Swap Battery Systems</title>
<link>https://hdl.handle.net/1721.1/163551</link>
<description>Development of Mechanical and Electrical Interfaces for Rapid Swap Battery Systems
Wucherer, Abigail
In the drive towards a globally decarbonized energy economy, rapid-swap battery packs provide a potential means to improve electric vehicle adoption in high-utilization industrial vehicles where lengthy charge times are a barrier to electrification. High-voltage, high-current battery connectors are a critical component for coupling the pack to the electric vehicle, distributing power from the battery to the drivetrain. Most state-of-the-art connections require precision alignment of contact surfaces and bolted preload or retention mechanisms, hindering the implementation of rapid-swap battery systems. The need for robust, high-cycle-life, high-power contacts motivates a new approach to connector design. The integration of electrical connectors with the battery mount’s structural loop creates a new design space where preload, geometry, and contact resistance may be optimized. This co-design approach enables mechanical and electrical functional requirements to be considered in conjunction to ensure reliable fulfillment in both areas while reducing the time for battery pack swaps. This work introduces two distinct approaches for aligning the pack to the vehicle, locking the battery in place, and engaging electrical contact with geometry unique to the system design. These approaches offer higher reliability, mechanical and electrical longevity, and automatic alignment capabilities during loading of the battery pack. Across both designs, the contact resistance is the primary metric for evaluating the electrical performance, and the contact pressure is used to evaluate the risk of mechanical wear. The first approach combines a quasi-kinematic coupling with integrated electrical contacts, allowing for repeatable and accurate positioning of the battery pack on the vehicle. A slotted ball-and-socket design is considered to accommodate angular misalignment and establish a repeatable contact area through elastic averaging. 
The second approach proposes a planar contact to further reduce the contact pressure and increase contact longevity without the need for expensive and rare hardened coatings. This system relies on a rail and flat system for guiding the battery pack into a locked position and engaging the planar contacts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163551</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Weaving Borders, Mapping Place: Afghan War Rugs of the Soviet-Afghan War (1979-1989)</title>
<link>https://hdl.handle.net/1721.1/163550</link>
<description>Weaving Borders, Mapping Place: Afghan War Rugs of the Soviet-Afghan War (1979-1989)
Hakemy, Arezo
Early Afghan war rugs delineate place through their pictorial design, embedding spatial memory into the tactile surface of the woven field. Emerging in the wake of the Soviet invasion in the late 1970s, these rugs integrate modern war iconography of tanks, helicopters, and maps into a medium historically tied to regional identity, spiritual practice, and craft. While earlier scholarship has often read these rugs as commodities of war tourism, this thesis moves beyond this interpretation to foreground the rug as a placemaking device, one that asserts territory and agency through mapping techniques. Afghan war rugs frame and define space on a land that has largely been considered placeless, at times porous and seemingly unknown. Through their borders, these rugs resist the geopolitical narratives that have long reduced Afghanistan to a war-torn frontier. The border serves as a framing device, structuring the rug’s design while simultaneously asserting territorial presence. Whether following a prescribed cartoon or improvising patterns, the weaver actively engages in “border-ing,” exercising cartographic agency by embedding personal, traditional, and political motifs into the rug. This research interrogates how early Afghan war rugs engage in spatial representation against the backdrop of the Soviet-Afghan war from 1979-1989. From historical colonial mapping projects to Soviet and American cartographic investigations, Afghanistan’s borders have long been sites of surveillance, resource extraction, and imperial ambition. Yet, in contrast to these external mapping practices, the war rug’s design is a resistant act of placemaking. Examining the rug as both artifact and map, this study explores how Afghan weavers reclaim their landscapes through rug making, embedding memory and materiality into woven form.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163550</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of texture in auditory scene analysis</title>
<link>https://hdl.handle.net/1721.1/163549</link>
<description>The role of texture in auditory scene analysis
Hicks, Jarrod M.
Everyday auditory scenes contain sounds from many sources. For example, when crossing the street, you might hear sounds produced from the rumble of passing cars, the chatter of pedestrians, and the rapid tick of crosswalk signals. To make sense of this complex mixture of sounds, the auditory system must separate the mixture into coherent perceptual representations that are likely to correspond to the underlying sources in the world. This process is known as auditory scene analysis. Although a rich body of work has probed auditory scene analysis with simple synthetic stimuli and revealed principles of perceptual organization, the extent to which these principles apply to real-world scenes with natural sounds remains unclear. This thesis empirically examines auditory scene analysis with realistic sounds. In particular, we study the perception of scenes containing a common class of environmental sounds known as “textures”, investigating how the auditory system makes use of statistical structure to separate textures from other sources and how the underlying statistical representation both constrains and enables scene analysis. We first investigated the mechanisms of hearing in noise using real-world background “noise” textures. The results show that the auditory system estimates the properties of “noise” textures and stores them over time, using the resulting internal model to estimate other concurrent sounds. We then considered how concurrent sound texture sources are separated from each other. We found that auditory scene analysis with textures involves some principles identified in classical scene analysis work with simple sounds, but that these principles apply to the higher-order statistical representations that define natural textures. Together, the results reveal new aspects of auditory scene analysis with real-world sounds and clarify the role texture plays in everyday hearing. 
Our findings provide a bridge between the simple, synthetic stimuli studied historically and the rich complexity of real-world sounds.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163549</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Driving Temporally Precise Learning in Individual Premotor Neurons using Closed-Loop Neurofeedback</title>
<link>https://hdl.handle.net/1721.1/163548</link>
<description>Driving Temporally Precise Learning in Individual Premotor Neurons using Closed-Loop Neurofeedback
Scherrer, Josefa R.
Much of human existence is based on our ability to learn complex sequences of motor movements. Speech, writing, and tool use all require activating a series of different muscles in a precisely timed pattern, and these patterns are learned through a long process of trial and error. How does the neural circuitry in our motor system learn to generate the activity patterns that drive these sequences? This question can be explored by studying a similarly precise learned motor pattern in a different organism, the learned song of the songbird zebra finch.&#13;
&#13;
Zebra finches learn to sing a stereotyped song through a process of vocal experimentation and comparison to an internal template. Every time a bird sings, it varies the acoustic parameters of its song and determines whether each variation brings the song closer to its internal template. Variations that result in a better match are then repeated in subsequent renditions of the song, in a trial and error process suggestive of reinforcement learning. The learning process requires a basal ganglia-thalamocortical loop called the anterior forebrain pathway (AFP) that is similar to basal ganglia-thalamocortical circuitry in mammals. Existing evidence suggests that the AFP learns a time-dependent bias signal that steers the motor pathway to avoid vocal errors. This bias signal is known to be dependent on the cortical output of the AFP known as LMAN (lateral magnocellular nucleus of the anterior nidopallium). However, little is known about the neural code in LMAN that underlies this bias signal, or how this neural code is learned and generated.&#13;
&#13;
We address these questions by building a neural feedback system that allows us to impose correlations between the activity of individual LMAN neurons and a dopaminergic reward signal. We designed a low-latency feedback system that records neural activity from a chronic Neuropixels 2.0 implant, extracts the activity of specific neurons, and plays noise bursts to the bird contingent on the activity of those neurons. We used this system to perform feedback based on the activity of an arbitrarily chosen neuron in LMAN within a given 10 ms window of the song. All birds responded to the feedback by learning to bias the activity of the chosen LMAN neuron up or down within the chosen time window, transiently driving firing rates up by as much as 200 Hz. We observed a remarkable degree of timing precision in the learned bias, with birds able to control the activity of the chosen neuron at single-millisecond levels of rise time and jitter. This high degree of precision informs models of the basal ganglia circuit architecture thought to drive learning. We also found the learned bias to be specific to the LMAN neurons correlated with reward, with neighboring uncorrelated neurons exhibiting no change in firing rate during learning. This single-neuron specificity strongly constrains the spatial precision of axonal targeting from thalamic regions that are thought to propagate the learned bias signal from the basal ganglia to LMAN. Finally, we demonstrated that fluctuations in neural activity of a given LMAN neuron drive transient and predictable changes in vocal output approximately 25 milliseconds later, consistent with what is known about signal propagation speeds in the song system. This fact, together with the results of our feedback experiments, confirms our central hypothesis that LMAN drives song learning by independently activating LMAN neurons at precise points in time in order to bias vocal output and avoid vocal errors.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163548</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Social Sensory Somatic Scores for Species, Spaces, Soils, and&#13;
Structures of Steep Slopes</title>
<link>https://hdl.handle.net/1721.1/163547</link>
<description>Social Sensory Somatic Scores for Species, Spaces, Soils, and&#13;
Structures of Steep Slopes
Bondarenko, Lina
Modern knowledge systems have physically and conceptually “flattened” the world, erasing the ecological, political, and sensory complexities inherent to sloped terrain. By attending closely to the slope—as both a material condition and a generative metaphor—this thesis foregrounds movement as a form of resistance to regimes of exploitation, abstraction, and estrangement that have historically transformed land into data and place into property. Weaving together interdisciplinary methodologies from performance studies, landscape architecture theory, feminist geography, ecological theology, environmental history, sensory ethnography, and media studies, SSSSSSSSSS dances an inclined methodological structure, oscillating deliberately between critical systemic analysis and situated sensory experience. Ch.1 sets the stage among steep slopes and introduces the discipline to movement as pedagogy, enacting the urgency for new methodologies into schemes of the project’s medium and the book’s format. Ch.2 is a feminist investigation of the ways modern infrastructures and spaces have been designed to reinforce land abstraction and commodification in the name of improvement, severing embodied relationality and contributing to societal apathy toward ecological and social crises. Imperial post-enlightenment statecraft, the suppression of wildness, and the standardization of level form have flattened our upright movements to enact a state of senselessness. Contradicting Ch.2’s straight critique, Ch.3 attempts to reweave the sinuous nuance of symbiogenesis between soils and species, revealing that humans are but one among many sloped organisms moving, and inclining, and co-evolving as the lithosphere; we have been slorgs all along. Slorgs belong to divine mythologies of terrain’s elevations and have reciprocated in admiration, mimicking topographic spatial functions and adorning the summits with artistic interventions, some inadvertently contributing to the damaging regimes of Ch.2. 
Interwoven through both chapters, outliers resisting those forces of governance and exploitation are often those displaced by them, those moving in ways the system polices and erases from comprehension: refugees, queers, witches, tricksters, artists, herbalists, and healers. The intended medium of SSSSSSSSSS coalesces in Ch.4: inviting the general public to participatory happenings with hills, composing scores, coaxing their inner slorgs to slither askew, sloping themselves as moving loci for sympoietic becoming. Multi-species attune to a social, sensed, somatic experience, co-composing spatial relations among local steep soils. Slorgs challenge the abstractions of dominant epistemologies in the temporal, situated act of trusting their own proprioception in collective balance, affirming the multidimensional value of embodied, ecological geo-choreography. Social Sensory Somatic Scores for Soils, Structures, Spaces, and Species of Steep Slopes are presented through photographs in Ch.4 and in moving image, available as supplemental material.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163547</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Spiritual Curation of American Modernism</title>
<link>https://hdl.handle.net/1721.1/163546</link>
<description>The Spiritual Curation of American Modernism
Saha, Indrani
Where do the spiritual go? In this study of late-nineteenth- and early-twentieth-century seekers, they join seances in Vermont farmhouses, attend Theosophical lectures on Karma, get lost in copies of Jnana-Yoga, journey to Buddhist temples in China, and consume spiritual manuals on Mentalphysics. But where do they go after those encounters? And, more importantly, what do they do? In this dissertation, they build modern art institutions. A cadre of artist-writers, museum curators, and public intellectuals found their power in early-twentieth-century America by building institutions to introduce a new, spiritually grounded modern art to a mercantile nation. In the US, beyond European sources for "the spiritual" were flirtations with vaguely "Eastern" ones by way of Theosophy. Those who sought to institutionally manifest Wassily Kandinsky's "spiritual" in art believed themselves to provide the assistance necessary to cultivate and preserve these spiritual impulses in modern art. Alfred Stieglitz's Intimate Gallery (1925-1929), Katherine Sophie Dreier's Société Anonyme (1920-1950), and Hilla Rebay's Museum of Non-Objective Painting (1939-1952), all in New York City, served as intermediaries in translating predominantly Eastern spiritual ideas into productive ways of being. Cultivating these spiritual protocols, each curator believed, would be necessary just to survive in a material world they held to be bankrupt of spirit. In other words, the American institutionalization of modernism built its canon around spiritual systems of national aesthetic welfare. Crucial to these spiritual curators' respective operations would be the promotion of not just any abstraction but a radically non-objective art thought to use the inner expressions of the artist to elevate the spectator. This dissertation takes the turn-of-the-century claims of spirituality by the founders of key art institutions seriously. 
In doing so, I argue that esoteric forms of Eastern spirituality infused formerly Protestant centers of culture to propel a twentieth-century embrace of radically abstract modern art.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163546</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable Mud: 3D Printing earth to achieve low-carbon, low-cost construction automation</title>
<link>https://hdl.handle.net/1721.1/163545</link>
<description>Programmable Mud: 3D Printing earth to achieve low-carbon, low-cost construction automation
Curth, Alexander (Sandy) McCormick
Large-scale additive manufacturing (LSAM) with locally sourced materials, such as earth, presents a promising approach to addressing the urgent challenges of rapid urbanization and construction-related carbon emissions. &#13;
This dissertation establishes a comprehensive framework for integrating low-carbon materials, particularly minimally processed earth, with computational design methodologies and robotic fabrication processes for architectural-scale applications. Through systematic material characterization, novel testing protocols, and case studies across multiple building systems, the research demonstrates that minimally processed earthen materials can be transformed into high-performance building elements uniquely suited to local environmental conditions and design considerations. The developed computational framework employs multi-objective optimization and material-aware toolpath generation to balance structural performance, thermal comfort, embodied carbon, and construction time. &#13;
Four case studies validate this approach: (1) toolpath optimization for shell structures, (2) a hybrid floor system combining shape-optimized concrete beams with 3D-printed ceramic blocks, (3) zero-waste earthen formwork for reinforced concrete, and (4) thermally optimized wall systems for passive climate control. Life cycle assessment reveals that 3D-printed earth structures have approximately one-fifth the embodied carbon of conventional concrete and one-fiftieth that of industry-standard 3D-printed mortar. This research bridges the gap between additive computational design and material circularity, offering scalable approaches to sustainable construction that can be implemented across diverse environmental and economic contexts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163545</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cooling Machines:&#13;
Exploring the Heat Mitigation Effect of Urban Trees with Computer Vision</title>
<link>https://hdl.handle.net/1721.1/163544</link>
<description>Cooling Machines:&#13;
Exploring the Heat Mitigation Effect of Urban Trees with Computer Vision
Klimenko, Nikita
As the impacts of climate change on cities become more pronounced, urban authorities are under pressure to prepare existing streetscapes for increased levels of heat stress. While many aspects of existing urban morphology have an impact on heat exposure (e.g. sky view factor, glazing levels, facade materials), they cannot be rapidly changed at scale across existing urban infrastructures. Urban authorities across the world increasingly turn to planting trees as a way of cooling urban streetscapes. Urban vegetation is indeed known to have a cooling effect, primarily because trees provide shade that prevents urban materials from heating up, and because they maintain their own internal temperature through evapotranspiration. Even though the positive impacts of urban trees on thermal comfort have long been known and well studied, little work is dedicated to how these impacts vary across trees of different species and morphology. This is due both to the complexity of studying vegetation life cycles at sufficient scale and to the dispersed nature of the issue across the disciplines of biology, urban climate, design, and data science. Nevertheless, this specific knowledge is vital to urban planners for deciding which trees have the most cooling effect in specific parts of the city. This thesis embraces the notion of trees as ‘cooling machines’ and dissects the diverse morphological and contextual factors that shape the effect of individual trees on the local urban heatscape. Leveraging a set of computer vision methodologies, including species recognition, context-aware segmentation, and photogrammetry, the thesis examines a large dataset of thermal imagery of urban trees collected in Los Angeles and Dubai to describe the impact of tree species, height, form, and spatial context on the cooling effect. 
Building on this approach, the thesis proposes a prototyping framework for architects to cure urban heatscapes via targeted curation of tree planting schemes, tying together the visual and thermal aspects of urban greenery. This approach will allow cities to leverage the power of urban vegetation most efficiently and tame urban heat in a scalable and globally affordable manner.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163544</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-Authoring Beyond the Human: Disordering Architectural Processes through Play and Multi-Agent Co-Existence</title>
<link>https://hdl.handle.net/1721.1/163543</link>
<description>Co-Authoring Beyond the Human: Disordering Architectural Processes through Play and Multi-Agent Co-Existence
Dundar Arifoglu, Nasibe Nur
This thesis reconsiders architectural authorship and the extended processes through which the built environment is shaped, using a series of playful, participatory interventions to expose the human-centric assumptions embedded in spatial decision-making. Presented as a collection of games and booklets, the work invites participants to engage with a wide spectrum of architectural processes—from site understanding and planning to permitting, construction, and post-occupancy—through the perspectives of multiple agents entangled in shared environments. These agents include beings, materials, living organisms, legal frameworks, and other forces typically excluded from spatial authorship, challenging conventional boundaries and expanding the discourse around the entangled forces and relations that shape the spaces we inhabit. A series of playful explorations opens space for friction, misalignment, and shared authorship. Each booklet engages a distinct stage of the architectural process through participatory formats that make visible the biases, exclusions, and regulatory fictions often treated as neutral. By gamifying these systems, the work reveals how architectural decision-making tends to privilege hierarchy, human control, and speed—often at the expense of multispecies co-existence. This thesis positions play as a critical lens: a way to rehearse alternative futures, to listen differently, to embody other perspectives, and to surface the black-box logics embedded in architectural norms. It invites readers and players to participate in unbuilding these assumptions. And the games evolve—with each use, each misreading, each encounter, and each agent who joins the conversation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163543</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Objectiles Guide to Time Travel: Re-Envisioning Building Materials as Narrative-Collecting Object-Projectiles on a Trajectory Through Space-Time</title>
<link>https://hdl.handle.net/1721.1/163542</link>
<description>The Objectiles Guide to Time Travel: Re-Envisioning Building Materials as Narrative-Collecting Object-Projectiles on a Trajectory Through Space-Time
Chaussabel, Celia Quynh-Mai
As the architectural discipline grapples with its role in resource depletion, carbon emissions, and waste generation, there is a growing urgency to stop sourcing new materials and to reuse materials from existing buildings instead. One challenge to integrating reused materials into current building practices is technical: inventorying, deconstructing, reconditioning, and designing with reused materials is slower and more labor-intensive than with new ones. But another challenge is cultural: the materials that make up architecture are currently perceived as unmoving and single-use, with little consideration for their trajectories from raw resource to landfill. This thesis is focused on developing an aesthetic sensibility and design methodology that helps us re-envision materials as objects on a trajectory instead: Objectiles, or object-projectiles. Objectiles are objects on an adventure across space-time to collect as many uses as possible. Rather than remaining associated with one primary use, Objectiles are impressionable, bearing ambiguous traces of all the uses they encounter as they re-circulate. Through the aesthetic qualities that hint at their many uses, Objectiles invite us to time travel - to imagine the potential past and future narratives that may precede or follow their present physical state. Embedding the aesthetics of Objectiles into architecture can lead to the development of a new collective consciousness of the materials that surround us. They can make us aware that all the objects around us have trajectories that extend beyond their present state, and lead to an alternative material culture of greater care in how we use, re-circulate, and dispose of all objects.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163542</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Problem-Independent Regrets on Expectation-Dependent Multi-Armed Bandits</title>
<link>https://hdl.handle.net/1721.1/163541</link>
<description>Problem-Independent Regrets on Expectation-Dependent Multi-Armed Bandits
Ai, Rui
The independence axiom (IA) proposed by Von Neumann and Morgenstern [50] is the cornerstone of expected utility theory. However, empirical experiments show that the IA is often violated in the real world. We propose a new kind of multi-armed bandit problem, which we call expectation-dependent multi-armed bandits, in which the expectation of outcomes may influence the agent’s utility, and we use it to rationalize the choices of agents in Machina’s paradox, where the IA fails. We design provably efficient algorithms with low minimax regret and show that their dependence on the time horizon T matches corresponding regret lower bounds, establishing statistical optimality. Furthermore, as we are the first to consider bandits whose utility depends on both realized outcomes and expectations, this work provides a bridge between machine learning and economic behavior theory, shedding light on how to interpret counterintuitive economic scenarios such as the bounded rationality explored by Zhang et al. [54].
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163541</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Differentially Private Synthetic Data Generation for Relational Databases</title>
<link>https://hdl.handle.net/1721.1/163540</link>
<description>Differentially Private Synthetic Data Generation for Relational Databases
Alimohammadi, Kaveh
Existing differentially private (DP) synthetic data generation mechanisms typically assume a single-source table. In practice, data is often distributed across multiple tables linked by relationships. This study presents a first-of-its-kind algorithm that can be combined with any existing DP mechanism to generate synthetic relational databases. The algorithm iteratively refines the relationships between individual synthetic tables to minimize their approximation errors in terms of low-order marginal distributions while maintaining referential integrity; consequently, it eliminates the need to flatten a relational database into a master table (saving space), operates efficiently (saving time), and scales effectively to high-dimensional data. We provide both DP and theoretical utility guarantees for our algorithm. Through numerical experiments on real-world datasets, we demonstrate the effectiveness of our method in preserving fidelity to the original data.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163540</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>VLEO-Bench: A Framework to Evaluate Vision-Language Models for Earth Observation Applications</title>
<link>https://hdl.handle.net/1721.1/163539</link>
<description>VLEO-Bench: A Framework to Evaluate Vision-Language Models for Earth Observation Applications
Zhang, Chenhui
Large Vision-Language Models (VLMs) have demonstrated impressive performance on complex tasks involving visual input with natural language instructions. However, it remains unclear to what extent capabilities on natural images transfer to Earth observation (EO) data, which are predominantly satellite and aerial images less common in VLM training data. In this work, we propose VLEO-Bench, a comprehensive evaluation framework to quantify the progress of VLMs toward being useful tools for EO data by assessing their abilities on scene understanding, localization and counting, and change detection tasks. Motivated by real-world applications, our framework includes scenarios like urban monitoring, disaster relief, land use, and conservation. We discover that, although state-of-the-art VLMs like GPT-4V possess extensive world knowledge that leads to strong performance on open-ended tasks like location understanding and image captioning, their poor spatial reasoning limits usefulness on object localization and counting tasks.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163539</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>She Swims in Silence: Spatial Narrative, Women's labor in Contemporary Art</title>
<link>https://hdl.handle.net/1721.1/163538</link>
<description>She Swims in Silence: Spatial Narrative, Women's labor in Contemporary Art
Feng, Haozhen
This thesis investigates the collective lives of Chinese women sent to Xinjiang in state-led migration after 1949 and the erasure of their gendered narratives. Drawing on a unique family history and archival evidence, the thesis reveals how the personal identities of these female “Aid to Xinjiang” participants were stripped away and subsumed under the grand socialist nation-building myth. Through practice-based artistic research, the project attempts to restore their lost voices and unacknowledged suffering and labor, framing the exhibition as a form of praxis. By analyzing the exhibition alongside case studies and critical analysis, the thesis, inspired by Bernard Stiegler’s theory of the “history of representational forms” and interwoven with ideas from philosophers like Judith Butler and Nicholas Mirzoeff, interrogates the gendered silences in official history and highlights the tension between state mythologies and personal memories. In doing so, the exhibition as an interdisciplinary form of research not only restores agency to a silenced group of women, but also demonstrates how artistic practice can serve as an alternative historiography to challenge dominant narratives and recover marginalized voices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163538</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of Universal Docking Solutions for Autonomous Underwater Vehicles</title>
<link>https://hdl.handle.net/1721.1/163537</link>
<description>Evaluation of Universal Docking Solutions for Autonomous Underwater Vehicles
Pryal, Erik Jeffrey
Due to their energy-constrained nature, Autonomous Underwater Vehicles (AUVs) need effective docking and charging stations to extend their mission durations. However, diverse AUV designs challenge the universal compatibility of docking stations. This study provides a framework for understanding what makes a docking station universal and offers two potential solutions: the Tapered Funnel Docking Station and the Magnetic Hub Docking Station. The Tapered Funnel features a conical entry that progressively narrows to accommodate various AUV diameters. The Magnetic Hub passively secures the AUV using magnetic forces and an external appendage guided into position by a square duct. MATLAB simulations evaluate these two charging station designs for compatibility with AUVs, alignment capabilities, and docking efficacy under realistic conditions. Both designs are tested through Monte Carlo simulations to address varying AUV approach conditions, showcasing their potential as universally feasible solutions. Future exploration into material durability, sensor integration, and power transfer efficiency will refine these designs for field applicability. This research lays the groundwork for universal docking standards and proposes adaptable solutions to alleviate operational limitations in underwater missions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163537</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dowel-laminated timber from waste lumber offcuts: Towards structural component circularity</title>
<link>https://hdl.handle.net/1721.1/163536</link>
<description>Dowel-laminated timber from waste lumber offcuts: Towards structural component circularity
Blowes, Rachel
In the context of the global climate crisis, there is a need to develop low embodied carbon building systems. Moreover, construction and demolition generate substantial amounts of waste. The use of salvaged materials for structural applications presents the opportunity to divert this waste while reducing the embodied carbon of new structural components. This thesis proposes a typology for dowel-laminated timber (DLT) slabs built up from waste lumber offcuts. A mechanical model for a segmented DLT system composed of geometrically heterogeneous offcuts is developed. Prototypes of this mass timber system are fabricated and tested to observe their failure behavior and to evaluate the mechanical model. A computational workflow is introduced which employs algorithmic methods for inventory assignment and structural optimization to design slabs which meet deflection requirements under loading. These approaches are undertaken to evaluate whether DLT systems can leverage the irregularity of salvaged lumber dimensions to produce structurally efficient forms.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163536</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Allopoietics in Real Time: Unfolding Among Art, Publics, Space, and Time</title>
<link>https://hdl.handle.net/1721.1/163535</link>
<description>Allopoietics in Real Time: Unfolding Among Art, Publics, Space, and Time
Aubry, Vinzenz
This thesis proposes a conceptual lens for understanding contemporary generative arts by introducing the terms Allopoietics and Liquid Media. Building on generative and participatory art, it focuses on the real-time processes among artworks, publics, spaces, and time through which meaning dynamically emerges. Drawing on the author’s artistic works—Conjunktion, Looking at the Sun, and Public Eyes—as well as critical engagement with hermeneutics, process philosophy, and media theory, this thesis explores how agency is distributed across these processes, offering a means to reconsider all elements as equally generative. Allopoietics, derived from cybernetics, describes the generative capacity of systems to produce outcomes beyond the sum of their actants, emphasizing collective unfolding over isolated creation. Liquid Media expands the notion of interfacing beyond traditional media to include publics, space, and time, conceptualizing these as mutable and entangled actants. These concepts outline an Aesthetics of Real Time that evaluates the dynamic relations among increasingly immediate systems. By proposing these new terms, the thesis invites a shift in perspective from object to process: viewing artworks not as stable materializations but as parts of real-time systems of collective meaning-making. While emerging from an artistic practice, this conceptual framework resonates with insights from contemporary sociology and cultural studies, where notions of fluidity, distributed agency, and relationality increasingly shape our understanding of complex systems and realities.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163535</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Limits of Longevity</title>
<link>https://hdl.handle.net/1721.1/163534</link>
<description>The Limits of Longevity
Rodriguez, Christopher W.
Do all animals age? Although aging seems to be a widespread phenomenon, some demographic studies have failed to find evidence of aging in certain species, including some highly regenerative species of planarians and Hydra that reproduce through asexual fission. However, all demographic studies have limits on observation times and sample sizes, so it is unknown if these failures were because of an actual absence of aging or these inherent study limitations. Some argue that these species must be ageless. Because of pressures that result from the lack of a clean division between the germ line and the soma in fissiparous organisms, agelessness becomes necessary as a prerequisite of this kind of reproductive strategy. Others argue that fundamental theories of the evolutionary biology of aging absolutely preclude agelessness. Even putting evolutionary arguments aside, some mathematical models of cellular competition and senescence argue that agelessness is impossible mechanistically in multicellular organisms. In this work, I address evolutionary and mechanistic arguments for and against agelessness. I develop mathematical models of the Disposable Soma Theory that incorporate facets of the arguments for agelessness in asexual fissioning organisms. I construct models of mutation accumulation and drift within an individual and explore how this genetic decay could manifest in the mortality rates. I use these models to understand if aging is inevitable generally and apply them to planarians and Hydra to seek to estimate the likelihood of aging more narrowly in those specific cases. Contrary to other work, I find that agelessness (defined as non-increasing mortality rates in a population) is indeed possible as the optimal evolutionary strategy for multicellular organisms. However, the evolution and mechanistic realization of agelessness requires conditions that are unlikely to be met in any existing species. 
In the case of planarians and Hydra, they likely do not face the right kind of evolutionary pressure to completely avoid aging. Even if they do face necessary evolutionary pressure, intraindividual genetic decay will almost certainly induce increasing mortality on the population with little recourse. Therefore, these species likely do age, although they could have median lifespans on the order of hundreds or perhaps even thousands of years, which would make detecting aging in any given population study quite difficult indeed.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163534</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamental representations of regions and interactions in spatial transcriptomics</title>
<link>https://hdl.handle.net/1721.1/163533</link>
<description>Fundamental representations of regions and interactions in spatial transcriptomics
Maher, Kamal M.
While cells are often considered the fundamental unit of biology, it is their spatial coordination that gives rise to the tissue architectures underlying both health and disease. Spatial transcriptomics technologies offer a unique window into this coordination by simultaneously capturing the spatial and molecular identities of individual cells, providing unprecedented insight into tissue organization. However, the computational landscape for analyzing tissue structure remains fragmented, with a wide array of disparate methods. In this work, we aim to distill these approaches into a unified quantitative framework for analyzing tissue architecture. Tissue structure can be represented in terms of anatomical regions as well as the cell-cell interactions that occur within them. For regional tissue organization, many existing methods—including those based on probabilistic models and graph neural networks—ultimately perform a form of smoothing, or local averaging of gene expression across neighboring cells. This process emphasizes large-scale spatial variation and enables standard single-cell analysis workflows, such as clustering and trajectory inference, to be applied in spatial contexts. However, we find that naive smoothing introduces artifacts that obscure meaningful spatial features. To address this, we introduce a minimal but powerful modification: subsampling within each neighborhood prior to averaging. This approach enhances spatial feature resolution and generalizes conventional analyses to spatial features: clustering identifies multicellular regions; data integration aligns spatial regions across samples and technologies; and trajectory inference captures spatial gradients. We also show that this subsampling strategy improves the performance of more complex downstream methods.
To further generalize our framework, we formalize the joint analysis of tissue regions and multiscale cell-cell interactions using signal processing over graphs: low-frequency components represent regional gene expression patterns across a tissue mesh; high-frequency components capture fine-scale, cell-cell interactions; and mid-frequency signals correspond to boundaries between regions and diffusive signaling. By interpreting spatial gene expression in this spectral framework, we provide a principled way to bridge conceptual and computational perspectives on tissue structure. Ultimately, this work serves as both a theoretical foundation to understand existing methods and a roadmap for developing future approaches to quantitatively describe molecular tissue architecture.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163533</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing a Functional in Vitro Model of the Neuromuscular Interface</title>
<link>https://hdl.handle.net/1721.1/163532</link>
<description>Developing a Functional in Vitro Model of the Neuromuscular Interface
Schwendeman, Laura A.
The neuromuscular system is responsible for the coordination of movement throughout the body, and while research has revealed many of the mechanisms involved in its function, there are still many gaps in our understanding of how all of the components of the system work and how they are affected by environmental factors and disease. This work focuses on developing methods and an in vitro model for studying a subsystem of the neuromuscular system known as the neuromuscular junction (NMJ), which is the connection between skeletal muscle and motor neurons and is relevant in many neuromuscular degenerative diseases. This work identifies that current in vitro NMJ models collectively lack the ability to support long-term, functionally contractile muscle tissue while providing compartmentalization and clear optical access for live imaging of muscle and motor neuron co-cultures. This work therefore presents STAMP, a microgroove patterning method for creating aligned, more physiologically relevant, functional, and optically accessible skeletal muscle tissue cultures on top of fibrin hydrogels. Through investigating a series of different sizing parameters, STAMP is shown to effectively align mouse and human skeletal muscle monolayers in vitro and influence the direction of muscle contraction under electrical and optogenetic stimulation while preserving skeletal muscle tissue integrity and viability. The STAMP approach provides a way to mold hydrogels and the morphology of muscle tissue and will be beneficial for addressing the need for compliant and optically clear substrates in modeling the neuromuscular junction.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163532</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Turning and Turbulence: A Comparative Study of Agility and Fluid Mechanics in Men’s and Women’s Soccer</title>
<link>https://hdl.handle.net/1721.1/163531</link>
<description>Turning and Turbulence: A Comparative Study of Agility and Fluid Mechanics in Men’s and Women’s Soccer
Sonner, Jessica E.
Female soccer players demonstrate high levels of agility but remain underrepresented in research and experience anterior cruciate ligament (ACL) tears two to eight times more frequently than their male counterparts [1]. These injuries are often associated with high-torsion movements at the knee, such as quick change-of-direction maneuvers in soccer [2]. To examine gender-based differences in agility, this study introduces an in-game metric based on change-of-direction speeds, derived from center-of-mass tracking data from the 2022 Men’s and 2023 Women’s FIFA World Cups. Results show that across positions, ball proximity, and game segments, female athletes tend to change direction both faster and more frequently than male athletes—supporting current injury hypotheses and informing gender-specific cleat design considerations. Beyond individual movement, this study also examines collective team behavior through a fluid mechanics lens. No significant gender differences were found in power spectral densities or second-order structure functions, suggesting symmetry in the underlying coordination dynamics. A direct cascade was observed in the 0–15 m range, indicating a consistent transfer of energy across spatial scales. Team dispersion and the Area-Dominant Spread Index correlated with structure function slopes, bridging spatial metrics with turbulence-based models of group behavior.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163531</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deciphering Features of Protective or Maladaptive Cellular Immunity in the Airways Following Primary and Repeated Pathogen Exposure</title>
<link>https://hdl.handle.net/1721.1/163530</link>
<description>Deciphering Features of Protective or Maladaptive Cellular Immunity in the Airways Following Primary and Repeated Pathogen Exposure
Bromley, Joshua David
The human respiratory tract is constantly subject to environmental stressors and perturbations that cause deviations from homeostatic conditions. The airway’s cellular constituents – epithelial, stromal, and immune cells – maintain local and global homeostasis by facilitating gas exchange and providing a barrier against noxious environmental agents (e.g., xenobiotics, allergens, toxins, and microbes). Infection with viral, microbial, and eukaryotic pathogens can disrupt airway homeostasis, leading to local and systemic inflammation, which can either contribute to the clearance or persistence of the pathogen. Prior antigenic exposure - prophylactically or from a previous infection - can promote transient and long-lived changes in cellular epigenetics, gene expression networks, and cell type composition that may contribute to protective (or maladaptive) immunity; however, we lack a complete understanding of the pathogen and cellular determinants that modulate immunity upon reinfection. In this thesis, we employed single-cell RNA-seq (scRNA-seq), computational methods, and microbial assays to discover the host and pathogen determinants governing airway homeostasis during primary infection and reinfection at barrier sites where the infection begins and may persist: the nasopharynx, airways, and lung parenchyma. First, we leveraged scRNA-seq to identify the cellular and molecular features of mild, moderate, and severe COVID-19, revealing that persons with severe COVID-19 have blunted anti-viral immunity in the nasopharynx. We further extended these findings by profiling nasopharyngeal swabs from vaccinated and unvaccinated individuals across three waves of SARS-CoV-2 variants, revealing shifts in viral tropism and that intramuscular COVID-19 vaccines promote the recruitment of putative antigen presenting macrophages to the nasal mucosa. 
Next, we used rhesus macaques to interrogate temporal host-pathogen interactions during SARS-CoV-2 infection and reinfection in the lower respiratory tract. This work identified innate training-like gene programs among myeloid populations that provided enhanced protection against SARS-CoV-2 reinfection. Finally, we used cynomolgus macaques as a model to study Mtb infection and reinfection, demonstrating that CD4+ T cells are required to restrict bacterial growth and induce protective immunomodulatory gene programming and cell-cell interaction networks in pulmonary granulomas formed following Mtb reinfection. These findings extend beyond long-held paradigms of protective TB immunity, revealing that CD4+ T cells regulate pro- and anti-inflammatory granuloma equilibria. Collectively, the work presented in this thesis highlights the utility of single-cell genomics for studying respiratory infection- and immuno-biology and provides a framework for contextualizing pathogen-induced deviations from biological homeostasis in the airways, which has implications for the development of prophylactics and therapeutics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163530</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expansion Microscopy of Extracellular Space for Light Microscopy-Based Connectomic Analysis</title>
<link>https://hdl.handle.net/1721.1/163529</link>
<description>Expansion Microscopy of Extracellular Space for Light Microscopy-Based Connectomic Analysis
Emenari, Amauche
In this dissertation, we present an exploratory methodology, termed expansion microscopy of extracellular space (ExECS), designed to enhance the visualization of the extracellular space (ECS) within aldehyde-fixed tissue. This technique leverages the principles of expansion microscopy (ExM), a method that facilitates nanoscale imaging on conventional microscopes through physical magnification of specimens, thereby supporting improved visualization of various cellular and tissue components including proteins, nucleic acids, and lipids [1]. The ECS forms a continuous environment between cells [2]. Its presence throughout neural tissue makes it an attractive target for contrast-based techniques such as shadow imaging, where the ECS is selectively labeled to produce negative contrast, revealing cell shapes and boundaries as unlabeled silhouettes within a labeled background. Although ECS delineation in fixed tissue is limited by the fidelity of fixation and may not fully reflect its live-state structure, the resulting contrast with the intracellular environment may be useful for investigating neural morphology and connectivity, offering a useful approximation of network organization. A key component of the ExECS methodology is the introduction of a custom-engineered ECS Filler solution. This formulation, detailed later, includes a macromolecular probe intended to serve as a proxy for the ECS. When applied to aldehyde-fixed tissue, the filler is designed to diffuse throughout the sample, preferentially occupying extracellular compartments while remaining largely excluded from intracellular regions. This selective distribution is expected to persist even in areas where aldehyde fixation may have increased membrane permeability. This diffusion behavior is presumed to result from a combination of size-based exclusion and intermolecular interactions between the hyaluronan polymers, which form the main component of the filler solution, and the plasma membrane.
The constituent hyaluronan is functionalized with amine groups to enable covalent crosslinking and with azide groups to allow fluorescent tagging via click chemistry. These modifications are intended to enable the ECS filler to act as a contrast agent by labeling the extracellular space, providing a foundation for a shadow-based imaging strategy to delineate the morphology of cellular structures. In parallel, we introduce a lipid-targeted form of ExM, termed membrane expansion microscopy (mExM). This approach employs a custom chemical tag that enables nanoscale optical imaging of lipid membranes using a lipid-optimized expansion protocol. mExM, via a novel post-expansion antibody labeling protocol, enables protein-lipid relationships to be imaged in intracellular organelles. This technique may offer new opportunities to examine aspects of neural circuitry by linking cellular morphology with molecular identity. Together, ExECS and mExM offer a potential basis for a light microscopy-based framework for connectomic reconstructions. Unlike traditional electron microscopy approaches, which are labor-intensive and low-throughput [3], this strategy aims to improve throughput in mapping of neuronal morphology with enhanced resolution that surpasses diffraction limitations. With the aim of bridging the gap between tissue ultrastructure and optical accessibility, this work may contribute to efforts toward scalable, high-resolution analysis of neural tissue organization.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163529</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding Disease Drivers Through Single-Cell Omics and Scalable Phenotypic Screens</title>
<link>https://hdl.handle.net/1721.1/163528</link>
<description>Decoding Disease Drivers Through Single-Cell Omics and Scalable Phenotypic Screens
Liu, Nuo
At the heart of any human disease is an imbalance between normal and aberrant physiological processes—a disproportion between hypo-immunity and hyper-immunity—a lack of homeostasis. In many cases, a more comprehensive understanding of the molecular basis underlying disease progression and therapeutic failure is still required to devise new strategies for improving patient outcomes. Technological advancements in biomedical research, especially in single-cell omics (e.g., single-cell RNA sequencing, single-cell spatial profiling), have given us unprecedented power to decipher the intricate cellular and molecular features that maintain—or disrupt—this balance. However, validating the causality of these features remains a huge challenge, as the wealth of data often results in a considerable number of hypotheses to test. In this thesis, I explore applications of single-cell genomics tools to understand cellular features associated with disease, with a particular focus on tuberculosis (TB). I then present a potential solution for performing phenotypic screens at scale. In the first part, I applied single-cell RNA sequencing and analysis to human lung samples from a TB-endemic region in South Africa. Using contrastive analysis, I identified key cell populations that are differentially abundant between TB-diseased and TB-negative lungs, including several neutrophil, macrophage, and fibroblast subsets. I discovered a de novo gene program highly enriched in the MMP1+CXCL5+ fibroblast subset that correlates with TB burden in a non-human primate (NHP) granuloma dataset, supporting the importance of this subset in TB. In a collaborative effort, we validated that this MMP1+CXCL5+ fibroblast localizes to TB granulomas in independent TB-diseased lung tissues using immunohistochemistry assays and recapitulated the induction of this population from lung-derived fibroblasts through in vitro stimulation experiments with M.tb.
Through single-cell analysis, I further report an SPP1+ macrophage population that is enriched in TB-diseased lungs. Moreover, I identify prominent crosstalk between SPP1+ macrophages and fibroblasts in TB-diseased lung that mirrors similar observations in cancer and fibrosis, supporting an important role for this axis in TB. These distinctive cell populations could serve as potential targets for novel host-directed therapies in tuberculosis. In the second part, I develop a method to compress small-molecule phenotypic screens by designing randomized drug pools with replicates of distinct candidates across different drug pools. Our team demonstrates that linear regression models can computationally deconvolute the individual hits, enabling the identification of top effectors for downstream validation. We benchmark and demonstrate the efficacy of this approach on a cost-effective imaging platform and then move to applications in pancreatic ductal adenocarcinoma (PDAC), where we discover a new perturbation response signature to IL-4/IL-13 with prognostic value for patient survival. We also showcase the utility of this tool for understanding immunomodulation effects in heterogeneous mixtures of primary blood cells. Together, this thesis describes novel cellular features important to TB in human lungs, offering new insights that complement existing knowledge from animal models. It also presents a bold yet effective strategy for scaling up phenotypic screens across different biological systems, providing a much-needed solution that bridges the translational gap between human disease and experimental models.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163528</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leaky Vessels</title>
<link>https://hdl.handle.net/1721.1/163527</link>
<description>Leaky Vessels
Cong, Frank (Haotian)
This thesis serves as a written synthesis of my art practice. It starts with Louis Pasteur’s swan neck flask, Robert Boyle’s air pump, the theater of proof, and cabinets of natural historians to discuss the intentional gesture of containment, exclusion, and controlled permeability in scientific containers and the knowledge production paradigm behind them. I argue that these containers possess another intrinsic gesture – to leak – that opens space for social and cultural dimensions to engage. I propose “leaky vessels” as an analytical tool and a methodology that foregrounds the tension between intentional and unintentional in order to attend to the issues of care, belief, and labor that arise within this dynamic. Chapter 2 develops the concept of “leaky” in three aspects – aesthetic intervention, historical residue, institutional sabotage – by analyzing art practices by Eve Andrée Laramée, Oron Catts and Ionat Zurr, Candice Lin, Maria Thereza Alves, Critical Art Ensemble, and Claire Pentecost. Each case demonstrates how alternative approaches to apparatuses can expose and unsettle the systems of control that govern knowledge authority, allowing seepage, contamination, and embodied histories to return to spaces designed to exclude them. Chapters 3 and 4 turn inward to examine my own art practice, Guardian and The Guarded (2024), RapidRise (2024), and Sweat Dough (2025). In Chapter 3, I discuss the experience of entering the biomaker space at MIT and cultivating animal cells in a pendant, interrogating how care, proximity, and cosmology might challenge the lab’s sterile and utilitarian logic. Chapter 4 discusses the other two projects that operate outside the lab, where I investigate how bodily entanglement with dough fermentation can leak into the broader context of food cultures, labor histories, and symbolic inheritance. Together, these chapters propose a practice that embraces contamination and relationality. 
Those that leak in and leak out are precisely where new layers of meaning reside.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163527</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven and Dynamically Feasible Trajectory Generation for Real-Time Powered Descent Guidance and Robotic Exploration</title>
<link>https://hdl.handle.net/1721.1/163526</link>
<description>Data-Driven and Dynamically Feasible Trajectory Generation for Real-Time Powered Descent Guidance and Robotic Exploration
Briden, Julia
Increasingly complex and high-mass planetary missions require autonomous long-horizon trajectory generation to achieve dynamically feasible powered descent guidance. While analytical and indirect methods are computationally efficient, significant simplifications of the dynamics and constraints are required for both problem formulations. Numerical optimization algorithms enable minimum-energy trajectory generation subject to system dynamics and safety constraints but currently remain computationally infeasible on flight-grade processors, taking seconds to minutes to compute a single trajectory. The objective of this dissertation is to develop new algorithms to advance the state of the art in trajectory optimization and planning for autonomous systems. Due to the limited computational abilities of radiation-hardened processors and an increased need for spacecraft and robotic autonomy, specialized algorithms capable of running in real time constitute enabling technologies for space exploration. Three major contributions are developed in this dissertation. First, a transformer neural network-based algorithm is created to predict the tight constraints that recover the solution and parameter sets for constrained optimization problems. By training on prior runs of the numerical optimization solver, the learned mapping can construct a reduced problem formulation that recovers the optimal solution while reducing runtime by up to an order of magnitude. Second, a method is developed to embed problem-specific information into the neural network training process. By embedding the Lagrangian and Lagrangian gradient merit functions into the training process, neural network-generated control policies are biased toward constraint satisfaction. Third, an autonomous hybrid targeting and guidance algorithm is designed to utilize probabilistic risk maps and numerical optimization to select and navigate to minimum-risk landing sites. 
Applications in planetary powered descent and landing, as well as rover path planning, are used to benchmark algorithm performance.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163526</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The simulation of a multi-product, multi-department factory</title>
<link>https://hdl.handle.net/1721.1/163522</link>
<description>The simulation of a multi-product, multi-department factory
Levy, Donald Stephen.
Thesis: B.S., Massachusetts Institute of Technology, School of Industrial Management, 1964
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163522</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparative elastic and plastic analysis and design of steel frames</title>
<link>https://hdl.handle.net/1721.1/163521</link>
<description>Comparative elastic and plastic analysis and design of steel frames
Padilla Valenzuela, Rodolfo Augusto.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1960
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163521</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of torpedo depth control</title>
<link>https://hdl.handle.net/1721.1/163520</link>
<description>Dynamics of torpedo depth control
Carleton, John Thomas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1992; Includes bibliographical references (leaf 72).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163520</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum theory of mode locking.</title>
<link>https://hdl.handle.net/1721.1/163519</link>
<description>Quantum theory of mode locking.
Lang, W. R. (W. Roy)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1971; Vita.; Bibliography: leaves 88-90.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163519</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of the engineering aspects of a wind tunnel magnetic suspension system</title>
<link>https://hdl.handle.net/1721.1/163518</link>
<description>An investigation of the engineering aspects of a wind tunnel magnetic suspension system
Chrisinger, John Edvil.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1959; Includes bibliographical references (leaf 62).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163518</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extrageniculate and extrastriate affiliates of the geniculocortical pathway in the cat</title>
<link>https://hdl.handle.net/1721.1/163517</link>
<description>Extrageniculate and extrastriate affiliates of the geniculocortical pathway in the cat
Berson, David Matthew.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Psychology, 1980; Vita.; Bibliography: leaves 114-126.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163517</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New theoretical methods for the study of the electronic structure of solids.</title>
<link>https://hdl.handle.net/1721.1/163516</link>
<description>New theoretical methods for the study of the electronic structure of solids.
Mele, Eugene John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163516</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A financial history of the Boston elevated</title>
<link>https://hdl.handle.net/1721.1/163515</link>
<description>A financial history of the Boston elevated
Stallman, Edward B.; Bush, Horace McM.
Thesis: B.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1926; Includes bibliographical references (leaf 34).
</description>
<pubDate>Fri, 01 Jan 1926 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163515</guid>
<dc:date>1926-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Evaluation of Skill-Based Imitation Learning Policies for Robotic Manipulation</title>
<link>https://hdl.handle.net/1721.1/163461</link>
<description>Design and Evaluation of Skill-Based Imitation Learning Policies for Robotic Manipulation
Palleiko, Andrew
Imitation learning (IL) is a popular approach for obtaining intelligent robotic policies by learning from human demonstrations. Within this field, there is significant interest in the development of multi-task architectures that can efficiently learn diverse sets of tasks. Skill-based imitation learning methods, which abstract action sequences into "skill" representations for planning, offer structural advantages for handling the challenges of multi-task imitation learning that make them an attractive option for this problem. This work presents a novel skill-based imitation learning architecture, with a causal transformer VAE skill-abstraction network paired with an autoregressive transformer planning policy. We find that our skill-abstraction network shows promise in identifying meaningful skills, but that the chosen planning architecture is poorly suited for predicting these skills due to multimodality in the resulting latent space. This is followed by a set of evaluations applied to an existing skill-based method, with comparisons to a non-skill-based network on a multi-task dataset. We systematically investigate the performance impacts of six different policy and dataset conditions: data quantity, task variety, retry behavior, control precision, goal representations, and zero-shot transfer. Our experiments reveal limited increases in skill-based policy performance with more demonstrations or task variety, but improvements across architectures through exposure to demonstration retry behavior. Overall, the skill-based architecture demonstrates greater robustness to goal representation variations and low-level process noise than the non-skill-based policy, while neither architecture achieves meaningful zero-shot generalization to novel task combinations. These findings provide insights into the current state of IL methods, with the additional goal of establishing a framework for the evaluation of future multi-task IL architectures.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163461</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Multimodal Streaming Perception: A Real-Time Perception Scheduling Framework Based on Relevance</title>
<link>https://hdl.handle.net/1721.1/163460</link>
<description>Towards Multimodal Streaming Perception: A Real-Time Perception Scheduling Framework Based on Relevance
Huang, Dingcheng
In modern human-robot collaboration (HRC) applications, multiple perception modules jointly extract visual, auditory, and contextual cues to achieve comprehensive scene understanding, enabling the robot to provide appropriate assistance to human agents intelligently. While executing multiple perception modules on a frame-by-frame basis enhances perception quality and information gains in offline settings, it inevitably accumulates latency, leading to a substantial decline in system performance in streaming perception scenarios. Recent work in scene understanding, termed Relevance, has established a solid foundation for developing efficient methodologies in HRC. However, modern perception pipelines still face challenges related to information redundancy and suboptimal allocation of computational resources. Drawing inspiration from the relevance concept and the inherent sparsity of information in HRC events, we propose a novel lightweight perception scheduling framework that efficiently leverages output from previous frames to estimate and schedule necessary perception modules in real time. Our experimental results demonstrate that the proposed perception scheduling framework effectively reduces computational latency by up to 27.52% compared to conventional parallel perception pipelines, while also achieving a 72.73% improvement in MMPose accuracy and comparable YOLO accuracy. Additionally, the framework demonstrates high keyframe accuracy, achieving rates of up to 98% in dynamic scenes. The results validate the framework’s capability to enhance real-time perception efficiency without significantly compromising accuracy. Additionally, the framework shows potential as a scalable and systematic solution for multi-modal streaming perception systems in human-robot collaboration.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163460</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fracture Mechanics of Networks</title>
<link>https://hdl.handle.net/1721.1/163459</link>
<description>Fracture Mechanics of Networks
Hartquist, Chase M.
Networks of interconnected materials permeate nature, biology, and technology due to their exceptional mechanical performance. Despite the importance of failure resistance in network design and utility, no existing physical model effectively reconciles strand mechanics and connectivity to predict fracture in the diverse networks that constitute polymeric, architected, and biological materials. While traditional models predict that the intrinsic fracture energy – the minimum energy to propagate a crack per unit area – of a polymer network is the energy to rupture a layer of chains, they can underestimate experiments by up to two orders of magnitude. In Part I, we show that the intrinsic fracture energy of polymer-like networks stems from nonlocal energy dissipation. We then reveal a general scaling law that captures nonlocal energetic contributions and connects strand mechanics with topological connectivity to universally predict the intrinsic fracture energy of stretchable networks. We measure intrinsic fracture energy using experiments and simulations of 2D and 3D networks with various strand constitutive behaviors, defect densities, strand length distributions, lattice topologies, and length scales. Results show that local strand rupture and nonlocal energy release contribute synergistically to the measured intrinsic fracture energy in networks. These effects align such that the intrinsic fracture energy scales independently of the energy to rupture a strand; it instead depends on the strand rupture force, breaking length, and connectivity. In Part II, we present a model for real polymer fracture and design elastomers with highly regular connectivity. End-linking then deswelling star polymers produces a class of elastomers with low defects and no trapped entanglements, enabling ultrahigh strain-induced crystallinity of up to 50% and stretchability that scales beyond the saturated limit. 
These features promote a pronounced elastocaloric cooling effect and enable reversible two-way tuning of thermal conductivity by strain or temperature modulation. The mechanical and thermal properties of these polymer networks offer promise in addressing challenges in clean energy, thermal management, and biomedicine. Our findings establish a physical basis for understanding network fracture and design principles for fabricating tough polymeric, biological, and architected materials across multiple length scales for advanced applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163459</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Sit-to-Stand Transition using Koopman Lifting Linearization and Human State Estimation</title>
<link>https://hdl.handle.net/1721.1/163458</link>
<description>Modeling the Sit-to-Stand Transition using Koopman Lifting Linearization and Human State Estimation
Bell IV, John H.
The Sit-to-Stand (STS) transition is one of the most dangerous daily activities for the elderly population, as it is one of the situations in which falls occur most often. Despite its risks, STS dynamics remain poorly understood, and current STS assistance devices fail to utilize knowledge of STS dynamics to effect their support. This thesis presents contributions to the dynamic modeling of STS and to human-robot collaboration for improving robotic assistance of STS. To coherently capture the multi-phase nature of STS, lifting linearization, a dynamic modeling methodology inspired by Koopman operator theory, is applied to subsume segmented local dynamics in a globally linear dynamic model. A novel class of lifting linearization basis functions, termed “State-Membership Product (SMP)” observables, enables both the seamless blending of local dynamics into a global model and the direct extraction of phase-specific behaviors from the global model. It is shown that an SMP-Koopman linear model tuned to published data of STS experiments is capable of reproducing the multi-phase STS dynamics with a single linear model. Building on this framework, STS is additionally modeled as a lifted linear feedback control system, composed of an SMP-Koopman-based open-loop biomechanical model of the human body and a linear quadratic regulator (LQR) which guides the body to stand up. The LQR controller, tuned to replicate STS motion, guides the human body model through the phases of STS without explicit phase-switches, improving system robustness. To enhance human-robot collaboration in STS assistance, a framework for estimating patient cooperativeness is also introduced, leveraging a simplified dynamic model and an Extended Kalman Filter. By analyzing a human’s initial response to applied physical and verbal cues, the estimation framework assesses willingness to engage in assisted STS. 
Together, these contributions advance both the modeling and estimation of STS, offering insights crucial for the development of safe, effective robotic assistance.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163458</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verse and Reversal: The Poetic Return to the Inner Child as Black Revolutionary Praxis</title>
<link>https://hdl.handle.net/1721.1/163457</link>
<description>Verse and Reversal: The Poetic Return to the Inner Child as Black Revolutionary Praxis
Dunnell, Kaelyn
Black revolutionary movements historically have centered the role of the Black child—as foundation, visionary, or representation of Black liberation. The identity of any given revolutionary movement is characterized by three tenets: resistance, imagination, and love. To uncover the origin of these three tenets for themselves, the individual must return to the inner child. This thesis is about the poiesis of the revolutionary—the making and re-making of the revolutionary—and in it I argue that the very process of forming revolutionary identity is poetic. I coin the phrase poetic revolutionary to capture that process, which involves tapping into the font of revolutionary soulfulness: one’s inner child, or the voice and experience of the Black child. The literature guiding this analysis is from June Jordan’s archive hosted at Schlesinger Library, with Voice of the Children, a children’s publication edited by Jordan, as one of the most notable works. I examine June Jordan as the model of the Black revolutionary who has uncovered the language of her child, and I also examine the works of the children she worked with (whose ages, 13–15, notably place them on the cusp of the definition of childhood that I adopt in this thesis—more in Section I). I gather evidence from workshop diary entries written by Jordan and by her students, poetry excerpts from Voice of the Children, and Jordan’s own writing from her childhood and beyond to support my theory of the poetic revolutionary.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163457</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying Human Balance Performance and Control to Inform Therapy</title>
<link>https://hdl.handle.net/1721.1/163456</link>
<description>Quantifying Human Balance Performance and Control to Inform Therapy
Shiozawa, Kaymie S.
Maintaining balance is essential for daily activities and overall health. However, balance capability often declines with age or due to health conditions such as stroke, increasing fall risk. Falls among older adults are a major public health concern, affecting 14 million older adults annually in the US and directly causing over 40,000 deaths. Timely and accurate assessment of balance impairment is crucial to prevent falls and promote independence. Current assessments rely heavily on subjective therapist evaluations, underscoring the need for objective, quantitative methods. With the growing strain on healthcare systems due to an aging population, continuous at-home balance monitoring is also increasingly important. Additionally, a comprehensive understanding of the motor control mechanisms that deteriorate with aging or disease is crucial for informing therapy methods and technologies.

The goal of this thesis was to develop and validate methods that quantify quiet balance ability and control in unimpaired and impaired human participants. The first part focuses on assessing balance ability, the capacity to maintain upright posture during quiet stance, which is currently often quantified by measures of body sway. A review of the strengths and limitations of current clinical and instrumented balance assessments highlighted a critical need for continuous assessment methods that enable objective monitoring of balance function outside of clinical settings. Addressing this need, a novel algorithm that quantifies balance ability using only force and motion sensors embedded in an instrumented cane was developed. Well-established balance measures were successfully estimated in both younger and older adults, demonstrating the proposed method's potential to facilitate continuous balance monitoring in real-world environments.

The next part focuses on identifying balance control strategies. The novel intersection-point analysis, based on foot-force direction and point of application, was used in conjunction with a simple biomechanical model and an optimal controller to quantify balance control. The first study demonstrated that unimpaired quiet balance in a challenging environment was best described by a controller that maintained minimal effort by adjusting relative ankle and hip joint torques. Applying this method to aging populations in a subsequent study revealed that older adults rely more on neural feedback, possibly to compensate for muscle strength deficiency. This study also quantified individual balance controllers, highlighting the method's potential as a diagnostic tool for aging populations. Finally, the model was extended to describe balance control after stroke. The results suggest that the non-paretic limb compensated for the paretic limb's abnormal coordination pattern by strongly favoring neural feedback. As one of the first studies to model quiet balance after stroke, this work lays the foundation for future efforts on studying balance impairments. The contributions of this thesis are instrumental to enhancing at-home monitoring, advancing clinical practices, and reducing fall-related injuries, ultimately improving quality of life for aging and neurologically impaired populations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163456</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deformable Object Manipulation with a Tactile Reactive Gripper</title>
<link>https://hdl.handle.net/1721.1/163455</link>
<description>Deformable Object Manipulation with a Tactile Reactive Gripper
Sunil, Neha
Manipulating deformable objects remains a fundamental challenge in robotics, as techniques developed for rigid objects often fail to generalize. Deformable objects exhibit infinite-dimensional configuration spaces, frequent self-occlusion, and high model uncertainty, making global state estimation and predictive modeling unreliable. To address these challenges, we propose a perception-driven framework that combines global visual understanding with local tactile feedback. Rather than modeling the full configuration of the object, we leverage local constraints, grounded in modular visual and tactile representations, to enable robust, reactive, and generalizable manipulation. The primary contributions of this work include:
• Chapter 2: Cable Following. A tactile control strategy for in-hand cable manipulation that decouples contact regulation from object pose control, enabling fast, reactive sliding and closed-loop plug insertion using only local tactile feedback.
• Chapter 3: Towel Edge Tracing. An extension of contact-based control to fabric edge following, together with the learned tactile perception networks that support this capability.
• Chapter 4: Visuotactile Grasp Affordance. A grasp affordance model trained in simulation and refined with tactile self-supervision, enabling high-confidence edge grasping on towels.
• Chapter 5: Dense Object Correspondence. A confidence-aware dense descriptor representation that supports correspondence across crumpled and symmetric garments in air and on a table.
• Chapter 6: Behavior Architecture and Planning Interfaces. Integration of perception modules into a reactive, confidence-based folding system, and an exploration of how dense descriptors can interface with demonstrations, language, and task and motion planning.
Collectively, these contributions show that global state estimation and dynamics prediction are not required for reliable deformable manipulation. 
Instead, semantically meaningful local interactions, guided by modular visual and tactile representations, can drive scalable, long-horizon behaviors across varied objects, configurations, and tasks.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163455</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fluid Sealing Challenges in Solid Oxide Electrolysis Cells and Rapid Swap Battery Systems</title>
<link>https://hdl.handle.net/1721.1/163454</link>
<description>Fluid Sealing Challenges in Solid Oxide Electrolysis Cells and Rapid Swap Battery Systems
Lindberg, Ian G.
This thesis explores the design and development of several mechanical elements relevant to two technologies important to a global transition to green energy: hydrogen and electric vehicles. The portion of the thesis relating to hydrogen focuses on preloading mechanisms and high-temperature seals, two design spaces crucial to the implementation of solid oxide hydrogen generation. Due to the high operating temperatures (600°C–800°C), seal materials commonly used in other applications are inadequate, and glass- or vermiculite-based seals must be used. The fragility of these seals makes them a common failure point, and consistent application of a preloading force is key to mitigating this. The concept of a variable-bypass piston is proposed as a preloading mechanism suitable for the high temperatures present inside solid oxide electrolyzer systems, and the development of seal geometries as well as flow characterization of porous steel wool seals to enable parametric design is documented. As an alternative to current sealing methods, initial development of a composite seal utilizing materials and manufacturing methods originating in the semiconductor industry was also conducted. The final section of the thesis proposes the concept and covers initial testing of fluid transfer through a kinematic coupling, a topic of potential interest for implementing liquid pack cooling in a system of rapidly swappable batteries for electric vehicles.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163454</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing the Performance of Skeletal Muscle Powered Biohybrid Robots</title>
<link>https://hdl.handle.net/1721.1/163453</link>
<description>Enhancing the Performance of Skeletal Muscle Powered Biohybrid Robots
Bawa, Maheera
Skeletal muscle powers all voluntary motion in many living creatures, enabling behaviors such as walking, jumping, swimming, and flying. The field of biohybrid robotics aims to use biological actuators, such as skeletal muscle, to power adaptable robots that respond to their environment. Previous work in this field has focused on deploying 3D skeletal muscle tissues to power robotic function. In natural systems, muscles can also be organized in 2D formats to power a range of movements such as fish-like swimming and peristaltic pumping. However, long-lasting 2D cultures of skeletal muscle have been precluded by force-generating cells delaminating from their underlying substrate. Building on previous work from our lab demonstrating a method to culture contractile skeletal muscle in 2D formats, this work aims to enhance the performance of these systems by tuning substrate stiffness and topography. We show that optimizing system parameters prolongs actuator lifetime and enhances force by 100x.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163453</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and Implementation of a Smart Factory for Educational Fiber Extrusion Device Production</title>
<link>https://hdl.handle.net/1721.1/163452</link>
<description>Development and Implementation of a Smart Factory for Educational Fiber Extrusion Device Production
Fillon, Marie
This thesis presents the development and production of FrED (Fiber Extrusion Device), an educational manufacturing system designed to bridge the gap between theoretical instruction and hands-on practice in process control, computer vision, and smart manufacturing. Building on an existing prototype, this work focused on transitioning FrED from a proof-of-concept into a production-ready system by designing scalable workflows, improving hardware and software integration, and developing tools to ensure traceability and repeatability across builds. A major contribution of this thesis was the enhancement and implementation of a smart factory environment capable of supporting batch production. This included designing and deploying applications using Tulip Interfaces to manage inventory, guide subassembly processes, and monitor production metrics in real time. A modular SKU system and structured bin labeling framework were introduced to reduce errors, maintain version control, and support future growth. Station-specific apps were developed and refined to ensure consistent assembly and simplify onboarding across a rotating team of users. In parallel, this thesis contributed to the evaluation and refinement of a vision-based diameter measurement system using a low-cost USB camera. The system was analyzed under various operating conditions and its limitations under motion and variable lighting were quantified. Multiple image processing strategies were explored and robustness metrics were developed to inform future improvements. To ensure pedagogical relevance, the system was tested in user-facing workshops and public demo sessions. Feedback informed updates to both the assembly process and instructional content. By the end of the development cycle, the system supported the successful production of 35 complete FrED units, establishing a replicable model for small-scale manufacturing. 
This thesis demonstrates how modular digital infrastructure can enable scalable hardware deployment. It also highlights the practical challenges of transitioning from prototype to production and proposes tools and methods that can support broader adoption of smart manufacturing principles in learning environments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163452</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing Vibration in Omni-Wheels: A Design of Experiments Approach to Optimizing Omni-Wheel Selection</title>
<link>https://hdl.handle.net/1721.1/163451</link>
<description>Analyzing Vibration in Omni-Wheels: A Design of Experiments Approach to Optimizing Omni-Wheel Selection
Sanghai, Rohan S.
Omni-wheels, known for enabling holonomic motion in robotic systems, often introduce vibration due to their complex geometry and multiple contact points. Unlike caster wheels with established testing standards, omni-wheels lack comprehensive characterization methods. While parallel studies by Ilkbahar [1] and Donnellan [2] explore their rolling resistance and static load capacity, a systematic analysis of vibration characteristics remains absent from the literature. This thesis presents an investigation of the vibration behavior of various omni-wheel designs using a Design of Experiments (DOE) approach. A full factorial experimental design was developed, considering factors such as wheel type, rotational speed, applied load, and wheel orientation angle. Individual regression models were developed for each of six wheel types, treating operational parameters as continuous variables. Vibration levels were measured using root mean square (RMS) acceleration, derived from Fast Fourier Transform (FFT) and Power Spectral Density (PSD) analyses of accelerometer data. Results show that rotational speed consistently increased vibration across all wheel designs, while lateral motion (90° angle) consistently reduced vibration compared to forward motion. The effect of applied load varied significantly between wheel designs, with some wheels showing reduced vibration under load while others remained unaffected. Wheels DZ(1) and Vex(5) demonstrated the lowest average vibration levels, though post-test inspection revealed trade-offs with durability, including roller deformation and material degradation. Interaction effects, particularly between angle and speed, were statistically significant for all wheel types, indicating that the benefits of lateral motion are enhanced at higher speeds.
This research provides a framework for optimizing omni-wheel selection to minimize vibration by developing wheel-specific predictive models that quantify sensitivities and interaction effects across various designs and conditions, improving system performance and stability. The findings highlight that wheel selection must consider not only vibration performance but also trade-offs with durability and rolling resistance, establishing vibration characteristics as a critical consideration alongside other performance metrics when selecting omni-wheels.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163451</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Model-Based Planning and Control Framework for Parkour-Style Legged Locomotion</title>
<link>https://hdl.handle.net/1721.1/163450</link>
<description>A Model-Based Planning and Control Framework for Parkour-Style Legged Locomotion
Chignoli, Matthew T.
Legged robots have long been envisioned as a means of expanding robotic capabilities beyond structured environments, yet achieving high-agility locomotion remains a fundamental challenge. This thesis presents a model-based framework for parkour-style locomotion, enabling robots to execute highly dynamic maneuvers such as jumps, rolls, and flips with precision and robustness. A key challenge in planning these motions is selecting an appropriate dynamic model that balances computational efficiency with physical accuracy. To address this, a model assessment strategy is introduced to determine the simplest model capable of capturing task-relevant dynamics. Even with well-chosen models, solving long-horizon trajectory optimization problems for dynamic motions is computationally demanding. This thesis introduces graduated optimization techniques, which improve solver efficiency and reliability by generating high-quality initial guesses through progressively refined problem formulations. Additionally, a novel formulation of rigid-body dynamics algorithms for systems with kinematic loops accelerates trajectory optimization and simulation. Finally, two control strategies are proposed to execute planned motions on hardware: a model-based tracking controller for real-time adjustments and an imitation learning policy trained on optimal trajectories to enhance robustness. Extensive experiments on hardware validate the framework, demonstrating the successful execution of complex, high-impact locomotion behaviors. By integrating advanced planning, optimization, and control techniques, this work establishes a foundation for high-agility legged locomotion, pushing beyond conventional automation toward real-world, dynamic robotic movement.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163450</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing and Optimizing Magnetohydrodynamic Induction Marine Energy Harvester</title>
<link>https://hdl.handle.net/1721.1/163449</link>
<description>Designing and Optimizing Magnetohydrodynamic Induction Marine Energy Harvester
Scali, William T.
Magnetohydrodynamic (MHD) power generation presents a promising approach for harvesting energy from marine environments, offering a sustainable alternative for powering naval assets and coastal infrastructure. While energy harvesting technologies are widely used in terrestrial and aerial applications, their implementation in marine environments remains limited. This thesis explores the feasibility of an MHD Inductive Marine Energy Harvester, optimizing its design for undersea naval applications to enhance energy efficiency and reduce carbon emissions with minimized construction costs. A theoretical 2D model was developed based on Maxwell’s equations and Fourier analysis to characterize the physics governing MHD power generation in seawater. This model was extended to multiple concentric gaps on one device, refining predictions of power output under varying flow regimes. Numerical simulations using MATLAB enabled the evaluation of key parameters, including fluid conductivity, magnetic field strength, and shroud design, to optimize energy conversion efficiency. Furthermore, geographical and coastal tide analyses were conducted to determine optimal deployment locations, maximizing power extraction from natural marine currents. Economic viability was assessed through a cost-benefit analysis, comparing the energy yield per unit cost of the harvester against existing renewable energy technologies and other maritime power sources. Results indicate that under specific conditions, MHD generators can effectively supplement energy demands, reducing reliance on conventional fuel or other electrical power sources. The findings of this research contribute to the advancement of marine renewable energy technologies, demonstrating the potential of MHD induction-based harvesting as a scalable solution for sustainable power.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163449</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solar-Powered Critical Cooling: A Theoretical Feasibility Study for Human Thermal Regulation</title>
<link>https://hdl.handle.net/1721.1/163448</link>
<description>Solar-Powered Critical Cooling: A Theoretical Feasibility Study for Human Thermal Regulation
Hall, Jeff
Over the last 50 years, the leading global environmental hazard has not been hurricanes, lightning, tornadoes, floods, or earthquakes, but extreme heat events. With climate models projecting an increase in the frequency, intensity, and duration of heatwaves in the coming decades, this threat to life is expected only to increase. Air conditioning has been demonstrated to reduce mortality during heatwaves, yet it uses an order of magnitude more energy than necessary to keep a human cool. Using principles of similitude to extrapolate the capability of existing vapor compression equipment, an objective function to maintain energy balance in a human exposed to extreme heat is developed across a design space. The function shows that in a standard forced convection air conditioning system, there is no opportunity to provide emergency cooling of a human due to the slow mass flow rate needed to cool air in a single stream. As such, status-quo attempts to cool humans with general-purpose air conditioning will always be an inefficient use of energy. By focusing on keeping people cool, not spaces, we propose three paths forward for critical human cooling that appropriately match the energy needs of humans: radiative cooling, liquid cooling devices, and low-mass flow air conditioning.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163448</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fractured Practices: How Schooling Norms Limit Modeling Practices in Traditional Technical Thermal-Fluids Engineering Courses -- And the Possibilities Emerging through the Cracks</title>
<link>https://hdl.handle.net/1721.1/163447</link>
<description>Fractured Practices: How Schooling Norms Limit Modeling Practices in Traditional Technical Thermal-Fluids Engineering Courses -- And the Possibilities Emerging through the Cracks
Huffman, Sandra
In professional science and engineering contexts, modeling practices are frequent and diverse. To understand, analyze, and communicate, scientists and engineers simplify and distort the complex systems with which they work. This practice is known as modeling. Typically, scientists create models to predict and explain phenomena while engineers develop them to analyze and test systems, make design decisions, and predict the performance of built systems. Models can include verbal (e.g., analogy, story), visual (e.g., diagrams, graphs, images), and symbolic (e.g., equations) representations. When scientists and engineers model, they do so expansively: pulling from different resources, combining modeling strategies, engaging in critique and iteration, and contextualizing their claims in the work of their field. This is not the case for students in technical engineering classes who are attempting to learn these skills. Traditional, lecture-based courses are the norm for introducing technical material to undergraduate engineering students. These courses typically consist of lectures, recitations, problem sets, and exams. In this type of class, students report homework and test problems as having an outsized influence on their learning approach. These problems tend to be narrow and prescribed. Colloquially known as ‘Textbook-Style’ problems, well-defined, single-solution problems are not sufficient to prepare students to successfully tackle the ill-defined, multifaceted engineering problems they will face in their careers. These problems do not elicit student engagement in scientific or engineering modeling practices. Instead, they lead to inauthentic, bounded learning where students develop strategies adequate for groups of similar problems, but too narrow for use outside of the classroom. There has been significant research on innovative educational interventions and alternative problem types shown to improve classroom learning.
However, educators work within established structures that resist change, leading to the perpetuation of insufficient practices. The gap between textbook-style problems and the problems engineers face, therefore, exists not just in the problem type, but in the context surrounding the task. In this work, I describe and characterize the norms and practices of the classroom environment through three qualitative studies, each centered on traditional technical thermal-fluids courses. Specifically, I investigate the ways in which the development of student modeling practices are supported or undermined. I do this, in part, by adapting the theoretical framework of Figured Worlds. Originally developed by Dorothy Holland and later used in Engineering Education research, figured worlds is a situative framework that allows researchers to look at distinct, sometimes contradictory cultural worlds within the same group and activity. In the first study, I look at individual student approaches to classroom tasks in a think-aloud study, comparing their problem solving approaches and analyzing prompt-student interactions. In the second study, I analyze small groups’ modeling practices and how they are limited by the cultural practices of schooling. In the third study, through semi-structured interviews, I document instructor perceptions of their research and teaching, and discuss the misalignments within and between these contexts. Together, these works outline the mechanisms by which school practices can inhibit the development of student modeling capabilities and the role of students and instructors in perpetuating these practices. In describing student and instructor behavior and contextualizing practices that may otherwise be ascribed to misconceptions, carelessness, or ignorance, I hope to build a foundation for future research into pragmatic educational interventions for enhanced learning outcomes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163447</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Behavioral Methods for Next-Generation Shipboard Power System Simulation: Letting SPARCS Fly</title>
<link>https://hdl.handle.net/1721.1/163446</link>
<description>Behavioral Methods for Next-Generation Shipboard Power System Simulation: Letting SPARCS Fly
Almquist, Ethan T.
Design requirements on modern naval platforms are increasing the complexity and criticality of onboard electric plants. They form the backbone of warship operational capability and are at the heart of maritime decarbonization. Tasks such as assessing the ship's capacity in a damaged state, optimizing the mission profile of a fleet of vehicles, and evaluating broad design spaces in an efficient manner are increasingly difficult as electric network complexity increases. Traditional modeling techniques are either too computationally expensive, or lack the fidelity necessary to produce meaningful insights into the electric network's operation. Behavioral modeling bridges this gap, but is underdeveloped to support the system architectures of tomorrow's ships. This work details the advancement of behavioral modeling of electrical systems to incorporate hybrid AC/DC and ring bus architectures, the development of parallelization techniques, and SPARCS: a software package offering Shipboard Parallelized Analytics with a Rapid Configuration Simulator.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163446</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for Longevity: Service and System Innovation</title>
<link>https://hdl.handle.net/1721.1/163445</link>
<description>Design for Longevity: Service and System Innovation
Lee, Sheng-Hung
The global demographic shift toward an aging population presents complex social, economic, and systemic challenges, necessitating innovative approaches to service design, systems thinking, and financial planning. This dissertation, Design for Longevity: Service and System Innovation, examines these transformations and proposes strategies to foster a “longevity society”, a new era in society necessitating a fundamental rethinking of age and ageing to effectively harness the opportunities afforded by increased life expectancy (Scott, 2021). This research is built upon five relevant paradigm shifts: 1. from age-based to stage-based mindsets, 2. from product-driven to service-driven solutions, 3. from human-centered to humanity-centered design, 4. from circular to longevity economics, and 5. from an aging society to a longevity society. These shifts redefine the role of designers and researchers in creating adaptive, inclusive, and sustainable systems for the future. This dissertation explores how tangible artifacts, Longevity Planning Blocks (LPBs), can be employed to create effective service encounters. The research questions explore 1. how to use boundary objects (BOs) to uncover and define latent user needs, 2. how to use a mixed-method approach to analyze experiment data, 3. data-driven persona creation, and 4. the design of longevity planning services across financial planning, service innovation, and system thinking. Central to the research is a study of LPBs, BOs designed to facilitate collaborative engagement between a facilitator and 69 Boston-based participants, stratified by age, gender, pre-tax annual income, and assets. LPBs, employed in experiments, help investigate participants’ needs and concerns across various life transitions and stages. 
These tangible BOs facilitated informal yet insightful discussions, uncovering how individuals navigate ambiguity, make complex decisions, manage their evolving physical, mental, and social health, and perceptions about living solo. Data from in-person longevity planning experiments provided nuanced insights into the interplay of individual, societal, and systemic factors shaping longevity planning services. A mixed-methods approach integrates qualitative and quantitative techniques, including expert and user interviews, co-creation workshops, pre- and post-experiment surveys, hierarchical cluster analysis, K-means clustering for persona development, and causal loop diagrams for longevity planning service system modeling. Constructivist grounded theory and exploratory factor analysis uncover emerging themes and systemic interconnections, emphasizing the importance of adaptive services that align with changing needs and broader social infrastructures. The study introduces the notion of Design for Longevity (D4L), expanding on longevity economics and circular economy principles to address the complexities of extended lifespans. D4L highlights how evolving resources, transformative needs, and systems integrate life stages into the design of products, services, and experiences. This dissertation contributes to service innovation, financial planning, and system design by proposing actionable insights for longevity planning services. It emphasizes multi-stage life planning, intergenerational collaboration, and systemic thinking as foundational to a longevity society. This dissertation contributes a mixed-method approach, offering design practitioners a replicable, data-driven framework for persona creation applicable beyond longevity planning. Concluding with reflections on social infrastructure, community, and culture, the study calls for cross-disciplinary collaboration to address longevity planning challenges. 
By advancing the understanding of longevity planning and its systemic implications, this work lays a foundation for designing a future where extended lifespans are inclusive and socially engaged.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163445</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated Prosthetic Leg Design Frameworks for People with an Above-Knee Amputation</title>
<link>https://hdl.handle.net/1721.1/163444</link>
<description>Integrated Prosthetic Leg Design Frameworks for People with an Above-Knee Amputation
Petelina, Nina T.
A well-fitting, high-performance prosthesis for people with a lower limb amputation can greatly improve users’ mobility and quality of life. Still, many amputees lack access to high-performance prosthetic components due to the cost and availability of continuous care. This thesis aims to design low-cost, high biomechanical performance above-knee prosthetic leg components (prosthetic foot and knee) that will result in a walking motion likely to be perceived as able-bodied after minimal acclimation time. Above-knee amputees have two common gait deviations from able-bodied and below-knee amputee gait: lack of early stance knee flexion (ESF) and delayed initiation of knee flexion (IOF) during late stance phase. These deviations are likely a result of prioritization of stability at the expense of other functions such as shock absorption and progression through stance. A preliminary perception study was conducted to investigate the acceptable bounds of gait deviation that can be incorporated into a prosthetic leg design without compromising the perception of "typical" walking. Using these results, I created the Hip Trajectory Error (HTE) framework for designing prosthetic feet specifically for people with an above-knee amputation. The HTE framework takes into account the lack of ESF by incorporating the shock absorption function of ESF within the prosthetic foot design. This is achieved by targeting able-bodied hip center motion, which is correlated with sufficient shock absorption during the stance phase. This thesis presents an optimization and performance evaluation process that resulted in a prosthetic foot structure that not only closely replicates able-bodied hip center motion but also could be manufactured for a low cost. An experimental study successfully demonstrated that the Hip Trajectory Error (HTE) framework can be used to predictively design prosthetic feet for above-knee amputees.
HTE-designed prosthetic feet enable comparable biomechanical performance to daily-use tuned and prescribed prosthetic feet within 10-15 minutes of acclimation time and without iterative multi-day fittings. Next, I proposed a method to recommend a damping coefficient for the prosthetic knee to achieve able-bodied peak knee flexion during swing phase. A range of recommended damping coefficients to achieve target peak knee flexion angle in transfemoral amputees was determined using a simple three-step framework. This framework incorporates effects from common transfemoral prosthetic gait deviations, such as slower self-selected walking speeds and delayed initiation of knee flexion during late stance. The calculated range of recommended damping coefficients was experimentally investigated and found to enable a peak knee flexion angle within two standard deviations of able-bodied peak knee flexion angle. Lastly, I created the Full Leg Optimization (FLO) framework to design the prosthetic foot and knee concurrently based on minimal inputs from the user and the prosthetist. The framework anticipates the lack of ESF and delayed initiation of late stance knee flexion and uses the HTE framework to predict the orientation and location of the knee mechanism. Using this prediction, the rotational axes of the prosthetic knee can be positioned to start knee flexion at a point in late stance chosen by the prosthetist to provide sufficient stability to the user. A proof-of-concept study demonstrated the accuracy of the prediction for one user after minimal acclimation time, confirming the ability to predictively design prosthetic leg components in tandem. The FLO framework can therefore be used to predictively design a passive prosthetic leg for above-knee amputees while considering common gait deviations due to stability needs.
This doctoral work demonstrates that the presented frameworks can be used to quantitatively design prosthetic feet and knees based on the needs of above-knee amputees, which could save fitting time, manufacturing cost, and improve mobility.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163444</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimodal Non-Contact Sensing of Neonatal Vital Signs Using Radar and Video</title>
<link>https://hdl.handle.net/1721.1/163443</link>
<description>Multimodal Non-Contact Sensing of Neonatal Vital Signs Using Radar and Video
Chityat, Inbar
Preterm neonates represent a vulnerable population for which traditional contact-based monitoring devices are not optimized, given their small size and complicated physiology. Adhesive sensors and wires can cause infections, discomfort, and impair the delivery of clinical care. Therefore, these most fragile patients could significantly benefit from remote health monitoring. This thesis establishes the foundation for a multimodal device designed for noncontact monitoring of neonates in the Neonatal Intensive Care Unit (NICU) that integrates a video camera and a radar. The device is used to estimate vital signs such as respiratory rate (RR), using both unimodal (solely video or radar) and multimodal fusion approaches that combine data from both sensors. Preliminary testing was conducted on neonatal simulator mannequins, followed by a clinical study at Tufts Medical Center NICU which has collected data from 16 neonates so far (with the goal of reaching 20). The collected data was processed, labeled, and organized using image processing techniques and manual review, and then analyzed using a Video Vision Transformer (ViViT) architecture, incorporating early, intermediate, and late fusion strategies. Initial analysis was conducted on the mannequin data and the first neonatal subject. The results show that for estimating RR in neonates, the early fusion approach outperformed the unimodal methods. In movement detection, compared to human labeling, the fusion techniques achieved high accuracy and precision. To conclude, this study demonstrates that multimodal analysis has the potential to outperform unimodal approaches by improving accuracy against gold standard monitoring, particularly in challenging real-life conditions, including motion artifacts and poor lighting. This work represents a step toward more robust, non-invasive monitoring solutions for neonatal care, with implications for broader applications in remote health monitoring.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163443</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Incubators for Species which Exhibit Temperature-Dependent Sex Determination: Application to Hawksbill Sea Turtles in Rising Ambient Temperatures</title>
<link>https://hdl.handle.net/1721.1/163442</link>
<description>Incubators for Species which Exhibit Temperature-Dependent Sex Determination: Application to Hawksbill Sea Turtles in Rising Ambient Temperatures
Finlason, Katana R.
As global ambient temperatures continue to rise, with the highest recorded annual averages since 1850 being within the last ten years, problems emerge for species exhibiting temperature-dependent sex determination. This is the process by which the sex of an animal’s embryo is determined by the temperature of the environment in which it is incubated, which can result in skewed sex ratios within a population, as in the case of the critically endangered Hawksbill Sea Turtle (Eretmochelys imbricata). Reportedly, 85-95% of Hawksbills sampled in the wild are currently female [3]. This sex imbalance can negatively impact the species’ ability to procreate, leading to the potential for extinction. Currently, no viable, long-term solutions exist to effectively and safely cool sea turtle eggs while still keeping them within their natural habitat. This research proposes the creation of sea turtle egg incubators designed to achieve a temperature range that will produce a higher percentage of male hatchlings to help rectify this imbalance in habitats heavily affected by climate change. These incubators are designed to be affordable, easy to build and, most importantly, safe for the sea turtle eggs. Three-month-long temperature trials for the incubator were conducted in Jamaica with conservationist community partners at Oracabessa Bay Sea Turtle Project. Results showed that this incubator is not only easy to manufacture and use, but that it successfully regulates the temperature range in favor of more male hatchlings, while also increasing the emergence rate of the hatchlings from 70% in natural nests to over 80%. During one of the hottest months in Jamaica, the incubator, deployed without water changes, doubled the predicted percentage of males produced by natural nests. When provided with cool water changes the incubator quintupled this value.
Throughout the months of August to October, the incubator achieved a temperature range that is predicted to produce 85-99% male hatchlings, thus counteracting the feminization phenomenon occurring in nature.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163442</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimal Constraint and Precision Placement in Cyclic Testing of High Life-Cycle Products</title>
<link>https://hdl.handle.net/1721.1/163441</link>
<description>Minimal Constraint and Precision Placement in Cyclic Testing of High Life-Cycle Products
Edington, David J.
In the electrification of heavy industry, rapidly swappable batteries provide an effective means to minimize vehicle downtime and the cost of operation. However, for this technology to take hold, electrical contacts that can both pass high amperage and endure a high cycle life require further development. The development of these electrical contacts is a highly experimental process, and thus establishing a method and test equipment to determine the physical and electrical characteristics of these contacts over their lifetime will allow for the accelerated development of these products. This body of work serves as a design guide to establish a physical testing mechanism to assess contact resistance degradation and physical wear over the lifespan of an electric connector. Data will then be collected on initial contact prototypes to characterize their performance. With this data, designs may be iterated and improved upon in pursuit of creating a universal standard for battery swap technology on electric vehicles in heavy industry.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163441</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Hierarchical Reflexive Control Framework for Autonomous Robotic Manipulation</title>
<link>https://hdl.handle.net/1721.1/163440</link>
<description>Development of a Hierarchical Reflexive Control Framework for Autonomous Robotic Manipulation
SaLoutos, Andrew
Within the field of robotic manipulation, much research focus has been placed on improving perception and planning algorithms, assuming that the actions output by these high-level planners will be easily achieved by the robot systems. However, to surpass human manipulation performance, fast and robust execution of manipulation plans is just as critical as improved perception and planning methods. In this thesis, we introduce the last centimeter problem, which states that the most difficult part of grasp execution is when less than a centimeter remains between fingertips and an object, and contact is imminent. To solve this problem, we propose a reflexive control framework, which is a manipulation control architecture that decouples low-level, high-bandwidth behaviors, which we call reflexes, from broad high-level plans. The reflexes are fast, autonomous reactions to local sensing information that are designed to add robustness to high-level manipulation plans while also reducing the necessary complexity of manipulation planning problems. To deploy our reflexes, we design hardware platforms that incorporate high-bandwidth actuation and low-latency tactile sensing, allowing us to maximize the reactive capabilities of the overall manipulation system. We validate our approach through studies on teleoperated grasping and autonomous planar grasping, which show that our reflexive controllers increase manipulation speed and robustness. Then, we perform extensive simulation studies for autonomous grasping in SE(3), conducting experiments with single objects as well as cluttered scenes, using a variety of state-of-the-art grasp planners. Our results show greatly improved grasp robustness with our reflexive controllers, across all object types and grasp planners. Further experiments show that the benefits of our reflexes persist across sets of objects that are larger, heavier, and more slippery, and with increasing magnitudes of errors in the executed grasp poses. 
While this thesis demonstrates that the reflexive control framework is effective at increasing grasp robustness during picking, our framework is constructed in a way that is amenable to extension to other tasks, like in-hand manipulation or constrained object placement, as well as application to more complex grippers, such as those with three or more dexterous fingers and more diverse sensing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163440</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distinct roles for energy storage and transmission&#13;
infrastructure in a renewables-based electric power system</title>
<link>https://hdl.handle.net/1721.1/163439</link>
<description>Distinct roles for energy storage and transmission&#13;
infrastructure in a renewables-based electric power system
Kim, Beomjun
Due to the intermittency of renewable resources, achieving a high coverage of renewable generation at low cost is one of the main hurdles to realizing zero-carbon electricity generation. In this study, we analyze the roles of energy storage systems (ESS) and transmission infrastructure in the cost-optimal deployment of a renewable electricity grid in the United States. We find that storage and transmission serve distinctly different functions: transmission is useful for addressing hours-long resource lows, but only plays a supplementary role in mitigating long-duration resource lows. Conversely, storage can handle both short-duration and long-duration resource lows. These different functions are driven in part by the large spatial footprints of the most extreme long-duration resource lows. Furthermore, the total cost of renewable energy in the system and the cost-determining technological components in the system are dependent on the renewables penetration relative to total demand, known as the energy availability factor (EAF). When the EAF is sufficiently low, the cost of a cost-optimized system is driven solely by generation costs. For low to intermediate EAF, both generation and transmission costs are dominant factors. At high EAF, generation and storage costs become the dominant factors.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163439</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wedged Vortex Generator Applications for Marine Vessels</title>
<link>https://hdl.handle.net/1721.1/163438</link>
<description>Wedged Vortex Generator Applications for Marine Vessels
Kimmeth, Jack
This thesis investigates the effectiveness of vortex generators (VGs) in reducing viscous drag in hydrodynamic applications. Initial experimental and computational fluid dynamics analyses identified wedge-shaped VGs as the optimal design for flow manipulation. Comparative testing of three wedge-shaped VG sizes at 1.3 m/s revealed the most effective configuration, which was subsequently evaluated across speeds ranging from 1.0 m/s to 1.6 m/s. The results showed a viscous drag reduction of 7.9% at 1.4 m/s. These findings were extrapolated to a full-scale bulk carrier using appropriate geometric and dynamic scaling factors. Total resistance was partitioned using Holtrop-Mennen approximations, allowing the drag reduction to be realistically applied to operational conditions on a trans-Pacific route. Material and installation cost estimates were also developed. Finally, implications for propulsion efficiency, flow-induced vibrations, and cavitation are discussed, with recommendations for future self-propelled model testing to further explore these effects.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163438</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prosody in Kichwa</title>
<link>https://hdl.handle.net/1721.1/163437</link>
<description>Prosody in Kichwa
Chango Masaquiza, Soledad
This thesis investigates the prosodic system of Salasaka Kichwa, focusing on the interaction between pitch, morphosyntactic structure, and word order in both elicited and spontaneous speech. Based on data from ten native speakers of the Salasaka community, the study analyzes approximately 150 utterances using Praat and ToBI-style prosodic annotation. The findings reveal a consistent alignment between the nuclear pitch accent and the leftmost constituent of the verb phrase in neutral declarative sentences, supporting the hypothesis that Salasaka Kichwa exhibits a head-final syntactic structure. This default prosodic alignment is disrupted by the presence of focus-sensitive or interrogative morphemes such as -mi and -chu, which reliably attract the pitch peak regardless of their position in the clause. In ditransitive constructions, pitch prominence consistently targets the dative-marked argument. Accusative-marked objects also receive prominence, but only when modified; in such cases, it is typically the modifying adjective or contrastive element that bears the highest pitch. Overall, the study demonstrates that prosodic prominence in Salasaka Kichwa is not governed by syntactic structure alone. Instead, it emerges from a layered interaction between morphology, information structure, and pragmatic marking, offering new insights into how prosody encodes grammatical and communicative functions in underdescribed head-final languages.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163437</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model Predictive Control Approaches for Dynamic Table&#13;
Tennis Swinging</title>
<link>https://hdl.handle.net/1721.1/163436</link>
<description>Model Predictive Control Approaches for Dynamic Table&#13;
Tennis Swinging
Nguyen, David H.
This thesis presents three model predictive control (MPC) formulations for robotic table tennis swinging, addressing the challenge of generating precise, real-time paddle trajectories for dynamic ball interactions. We explore key differences in optimization structure, solver strategy, and real-time implementation, evaluating each approach through hardware experiments that measure strike condition tracking and hit success. The final controller integrates the full task of a table tennis possession by planning the return ball trajectory through the contact dynamics and generating a swing to achieve it. This controller improves the hit rate of the system from 88.3% to 97.6% and significantly enhances strike condition accuracy and smoothness, enabling control over the landing location and spin of the ball.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163436</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Certification of Deep Learning-based Dynamical System Identification</title>
<link>https://hdl.handle.net/1721.1/163435</link>
<description>On the Certification of Deep Learning-based Dynamical System Identification
Zhang, Wang
Dynamical system identification, the reconstruction of a system’s governing equations from observations, has been studied for decades. With the recent emergence of deep learning techniques, neural network-based parameterization enriches this classical field by offering new capabilities in modeling complex systems. While promising advances have been made, these black-box models face significant challenges due to their limited interpretability and lack of physical guarantees, raising concerns about their applicability in scenarios where trustworthiness is critical.&#13;
&#13;
In this thesis, we develop a comprehensive framework to analyze, understand, and learn dynamical systems. We start with a contrastive learning method to capture system invariants (i.e., conserved quantities) from trajectory observations of dynamical systems. Building on these learned invariants or known priors, we introduce a projection layer for neural networks that guarantees the preservation of physics constraints in the learned dynamics models. This two-step approach significantly improves the trustworthiness and interpretability of the traditional black-box models. On top of this, we extend this methodology to learn physically meaningful embeddings corresponding to inter-system characteristics, enabling zero-shot meta-learning capabilities for dynamical system models. Finally, we reduce the bias gap in the classical neural network-based aleatoric uncertainty estimators. We identify overestimation issues in existing variance attenuation methods and propose a novel denoising-based approach that provides more accurate estimates of data uncertainty. This method not only applies to regression tasks but also extends to dynamical system observations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163435</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Impact of Helicopter Vibrations on the Musculoskeletal Health of US Army Blackhawk Helicopter Pilots</title>
<link>https://hdl.handle.net/1721.1/163434</link>
<description>Modeling the Impact of Helicopter Vibrations on the Musculoskeletal Health of US Army Blackhawk Helicopter Pilots
Johnston, Julie E.
The UH-60, used for troop transport, MEDEVAC, and mission control, has evolved over the last 45 years from the Alpha Model to the Lima and Mike models that are currently utilized. Previous studies investigated the impact of Whole-Body Vibrations (WBV) on aviators and the resulting musculoskeletal injury, but none have investigated the efficacy of the Mike model’s Active Vibration Control System (AVCS) in reducing the impact of helicopter vibrations on musculoskeletal health.&#13;
Computational analyses of a biomechanical model using OpenSim and motion capture at varying levels of vibration were conducted. These analyses quantify the response of the spine and the surrounding muscles when vibratory loads are applied while positioned to manipulate the flight controls. A musculoskeletal model was developed to represent the aviator in the seated posture required to effectively manipulate the flight controls. To develop the model, the team recorded motion capture data with a pilot in a pilot test for concept validation. This data was then processed and input into the OpenSim inverse kinematics tool to determine joint angles and the muscle-tendon lengths of several muscles in the back. Unlike the initial predictions, the muscles on the right side of the back were not consistently longer than those on the left side. &#13;
A survey was also developed that builds upon previous efforts, seeking to understand the aviator’s perspective on musculoskeletal injury and prevention, with a focus on the back. Aviators are asked to describe the cause of their injury, methods of injury prevention, and recovery techniques, encompassing numerous subpopulations of flight experience: Lima-majority, Mike-only, Mike-majority, and an even mixture of L/M. The data attempts to characterize the impact of the AVCS on aviator spine health. The AVCS should decrease the rate of injury by reducing the vibratory loads experienced by the aviator. This survey is distinct from previous questionnaires in that it focuses on the user’s perspective of the differences between the two models and the injury or pain felt by each service member.&#13;
While a trend of reduced injury occurrence was expected amongst the Mike-only aviators versus those with Lima-majority flight hours, this was not the case. Injury prevalence was consistent across most populations, indicating the potential inefficacy of the AVCS. Analysis of open-ended responses, particularly from the hybrid group, provides some context for the perceived impacts of using the AVCS. Some population demographics were not represented in this survey due to the nature of the unit being surveyed, which may impact the validity of some results.&#13;
By quantifying the perceived efficacy of the AVCS as it relates to chronic musculoskeletal injury using a survey of pilot experience factors (flight hours, airframes, operating theatres, etc.), and by representing the maladaptive posture of the pilots with a computational simulation based on experimental pilot data, a full picture is developed of the risks to the near- and long-term health of US Army Aviators. The aim is to expand the overall understanding of how vibration impacts the musculoskeletal health of aviators and their perception of its lifelong effects on health. The ultimate goal is to aid in the design of additional countermeasures to improve aviator spine health and to serve as a platform for the optimization of systems like the AVCS.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163434</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Theories for Compact, Low-energy, Clog-resistant Drip Irrigation Emitters</title>
<link>https://hdl.handle.net/1721.1/163433</link>
<description>Design Theories for Compact, Low-energy, Clog-resistant Drip Irrigation Emitters
Ghodgaonkar, Aditya
This thesis presents the derivation, experimental validation, and demonstration of new design theories for compact, low-pressure, clog-resistant drip emitters that can make drip irrigation affordable, reliable, and easier for farmers to adopt. Broad adoption of water-efficient irrigation methods such as drip irrigation is imperative to sustainably meet projected global food demand against the backdrop of diminishing freshwater resources, constrained arable land, and climate change. In drip irrigation systems, emitters are passive flow-regulating devices that are inserted into the drip tube to align with every plant. They are designed to provide a constant flow rate once they are pressurized to at least their activation pressure, thus ensuring uniform, localized irrigation of plants. However, conventional emitters directly contribute to three barriers that have limited drip irrigation adoption – high raw material-driven equipment costs, high pumping power costs associated with pressurizing all emitters in the field to their activation pressure, and gradual loss of reliability due to clogging. Compact, low-pressure, clog-resistant emitters can address these challenges, but to design them, we must model and tune their operating physics, which is centered around two complex features – a millimeter-scale tortuous passage called the labyrinth, and fluid-structure interaction (FSI) involving a flexible silicone rubber diaphragm and a micro-duct. This makes conventional design approaches relying on high-fidelity simulation software or empirical trial-and-error too expensive and time-consuming to use for the development of compact, low-pressure, clog-resistant emitters on competitive industrial timelines. This thesis addresses these challenges through three contributions. &#13;
&#13;
The first contribution presents an empirically derived hydraulic model of emitter labyrinths, which are typically the most volume-intensive feature of emitters. The model relates labyrinth flow rate to select material volume agnostic parameters, allowing designers to create compact labyrinths with desired hydraulic performance. The compact labyrinths can enable up to 10% reduction in the raw material-driven cost of drip equipment. &#13;
&#13;
The second contribution presents a 1-dimensional model of the FSI in emitters that can predict their flow rate-pressure performance in 2-3 minutes and within 8-14% error, cutting down on design cycle times by orders of magnitude. This facilitated the rapid synthesis of low-pressure emitter designs having 50-60% less activation pressure than conventional emitters, cutting pumping power costs by an estimated 18-23%. &#13;
&#13;
Together, the first two contributions can enable an estimated 18% reduction in the lifetime costs of drip irrigation, but long-term adoption requires that the emitters be clog-resistant and compatible with the current maintenance practices of farmers. To that end, the third contribution presents an experimental investigation of clogging in low-pressure emitters. The results of the investigation directly correlated the geometry of emitter hydraulic features to the critical particle size that would clog them. As a result, compact, low-pressure emitters could be designed to be compatible with the same filters and maintenance practices as current state-of-the-art products that have higher activation pressures. This was confirmed by field testing the compact, low-pressure, clog-resistant (MIT) emitters alongside commercial reference designs with their prescribed filters for nearly 1200 hours. At the end of the field test, the MIT emitters still held 90-94% of their initial flow rate, putting them on par with or better than the reference products in terms of irrigation reliability. The collective contributions of this thesis present the knowledge needed to design emitters that can make drip irrigation more affordable to adopt by farmers and demonstrate that substantial capital and operating cost reductions can be realized without sacrificing product reliability or requiring expensive changes to current farmer maintenance practices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163433</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrodynamic Behavior of Pop-Up Satellite Archival Tags (PSAT) Subject to Vortices</title>
<link>https://hdl.handle.net/1721.1/163432</link>
<description>Hydrodynamic Behavior of Pop-Up Satellite Archival Tags (PSAT) Subject to Vortices
Hoo, Stephanie
Pop-up Satellite Archival Tags (PSATs) are a combination of satellite and archival tags used by marine biologists to collect large-scale movement and behavioral data on large pelagic life for up to two years [1]. However, current commercial PSATs have an unusually high failure rate when tagged on tuna and cost upwards of $4000, making it both difficult and expensive to collect data [14]. Upon investigation, the top two failure modes of tuna-affixed PSATs have been identified as drag from movement/tissue healing and pressure cycling [14]. Current commercial PSAT manufacturers do not account for the vortices shed by fish when testing their designs, a large oversight that could account for their high failure rate [15]. The work herein determined the effects of vortex shedding on PSAT hydrodynamic behavior, used these results to inform the design of novel PSAT body shapes, and conducted a head-to-head comparison of these designs with existing commercial PSATs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163432</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combating Corrosion and Monitoring Microgrids on Coast Guard Patrol Boats</title>
<link>https://hdl.handle.net/1721.1/163431</link>
<description>Combating Corrosion and Monitoring Microgrids on Coast Guard Patrol Boats
Buchanan, Maxwell Calvin
Marine corrosion presents a persistent threat to the reliable operation of U.S. Coast Guard Fast Response Cutters (FRCs). This thesis investigates hybrid cathodic protection strategies combining impressed current cathodic protection (ICCP) systems and sacrificial zinc anodes to combat corrosion on such vessels. Observing over 550 cumulative months of ICCP system data across 46 FRCs, this thesis identifies operational trends, failure modes, and unique regional behaviors. To validate observed patterns and explore failure scenarios, the study implements finite element modeling using COMSOL Multiphysics. These simulations replicate normal operation, reference electrode failure, propeller passivation, localized zinc loss, and hull coating failure for both a generic 35m hull and the FRC hull. These models emphasize how system behavior responds to material variations, temperature, and system health, offering a diagnostic framework for optimizing ICCP configurations. Field and laboratory experiments further ground the computational findings. These include shipboard hull potential surveys and analysis of zinc anode wastage across multiple cutters. Controlled experiments on nickel aluminum bronze (NAB) passivation using miniaturized ICCP test systems are explored for further study. Initial results show variation in zinc consumption and corrosion behavior depending on ICCP setpoints, with higher protection levels (-1050 mV) often correlating with reduced zinc depletion. The thesis also explores energy diagnostics onboard FRCs via non-intrusive load monitoring (NILM). A case study on the USCGC WILLIAM CHADWICK describes monitoring auxiliary machinery loads through NILM signatures and suggests expansion to critical panels and DC systems. By integrating fleet data, physical experimentation, and simulation, this thesis advances future efforts in patrol boat corrosion monitoring, ICCP optimization, and resilient microgrid management.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163431</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Incorporating High-Resolution Tactile Perception for Performative and Generalized Robotic Manipulation Through Compliance Estimation and Hardware Design</title>
<link>https://hdl.handle.net/1721.1/163430</link>
<description>Incorporating High-Resolution Tactile Perception for Performative and Generalized Robotic Manipulation Through Compliance Estimation and Hardware Design
Burgess, Michael
In robotics, replicating the natural proficiency with which humans perform manipulation tasks has proven challenging. Modern control schemes are predominantly learning-based and thus depend heavily on data collected via teleoperated demonstrations. Humans rely on our tactile perception to perform contact-rich and dynamic manipulation tasks. By more seamlessly incorporating high-resolution tactile sensing and haptic feedback into teleoperation interfaces, we can work to create stronger demonstration data to support the development of more effective learned control policies. In this thesis, we present two contributions toward this goal. First, we develop an algorithm to estimate the compliance of grasped objects in real-time from tactile images to provide haptic feedback to remote users. This algorithm combines both analytical and learning-based approaches to better generalize across both object shapes and materials. Second, we create a 1-DoF robotic gripper design with integrated tactile sensing. Inspired by the principle of self-similarity, this gripper is designed to better conform to complex object geometries than traditional designs and more securely grasp objects of many shapes and sizes. Together, these contributions can be utilized to create robust, tactile-aware teleoperation platforms. These platforms would facilitate more effective data collection and thereby promote the development of more performative autonomous action in generalized robotic manipulation scenarios.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163430</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring Complexity of Model-Based Controllers for Legged Robots</title>
<link>https://hdl.handle.net/1721.1/163429</link>
<description>Tailoring Complexity of Model-Based Controllers for Legged Robots
Khazoom, Charles
Humanoid robots promise human-like mobility, but must manage complex and often conflicting control objectives. While model-based controllers can address these challenges using online optimization, they have high computational demands. Model predictive control (MPC) provides closed-loop stability with online trajectory optimization, but achieving real-time rates is difficult for high-dimensional systems. To mitigate this limitation, most MPC implementations rely on reduced-order models (ROMs) that simplify planning but fail to capture whole-body constraints like joint limits and self-collisions. Reactive whole-body controllers (WBCs) partially address this limitation by projecting ROM trajectories onto some whole-body constraints, but these are restricted to acceleration-level constraints like friction cones and torque limits. This thesis advances humanoid planning and control through a renewed focus on model fidelity, solution accuracy, and solve times with three key contributions. First, we propose the CBF-WBC, which augments reactive WBCs with position constraints using control barrier functions (CBFs), enabling the MIT Humanoid to avoid self-collisions with minimal computational overhead. As a result, the robot can reactively deviate from infeasible trajectories from a reduced-order MPC. Despite fast solve times below 100 microseconds, conflicts can arise between the reduced-order MPC and the CBF-WBC. To address this, we enable real-time whole-body MPC using the alternating direction method of multipliers (ADMM) to provide low-accuracy solutions at high feedback rates. The controller is reliably deployed on hardware and enables the MIT Humanoid to walk robustly on rough terrains and plan complex crossed-leg and arm motions that enhance stability when recovering from significant disturbances. While low-accuracy solutions often suffice for real-time control, we found that higher accuracy could still improve closed-loop performance if computational speed allows.
Building on this insight, we propose a framework to simultaneously optimize solution accuracy and model complexity to maximize closed-loop performance. Instead of planning with a single model that is too complex or too simple, solve times can be reduced by planning over a sequence of models of decreasing complexity. We extract ROMs from whole-body dynamics equations and optimize their horizons, discretization timesteps, and solution accuracy using black-box optimization. The optimizer can sacrifice model complexity for additional ADMM iterations, reducing falls by nine-fold and enabling a 2 m/s walking speed on hardware.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163429</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Net Climate Impact of AI: Balancing Current Costs with Future Climate Benefits</title>
<link>https://hdl.handle.net/1721.1/163428</link>
<description>The Net Climate Impact of AI: Balancing Current Costs with Future Climate Benefits
Turliuk, Jennifer
What is the net impact of artificial intelligence on climate change? Existing studies focus on AI's footprint, but few analyze AI's trade-offs. This paper develops a framework to quantify both the Greenhouse Gas (GHG) emissions and the climate change costs and benefits of AI systems, addressing the time value of carbon and the installed base of existing AI infrastructure. We examine the energy demands of AI, which are growing rapidly and threatening companies' net-zero commitments, while also analyzing AI's potential to enable emissions reductions through applications such as optimized energy systems, demand response, grid management, and electrification acceleration. This research introduces the Net Climate Impact Score (NCIS) of AI, a novel equation to calculate the net climate impact of AI technologies that considers both immediate emissions and potential future benefits, and provides a methodology for assessing AI projects holistically. We demonstrate that while current AI applications are predominantly emissions-intensive, strategic deployment focused on energy system transformation could potentially deliver net climate benefits within specific time frames and applications. However, improvements in energy efficiency and emissions reductions resulting from AI are, absent climate policy, likely to generate both direct and indirect rebound effects that could undermine the emissions reductions and reduce the climate benefits of AI. The research concludes with policy and industry recommendations that propose technological pathways that could maximize AI's positive impact while minimizing its environmental footprint.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163428</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wide Range Switched Mode RF Power Amplifiers and their applications</title>
<link>https://hdl.handle.net/1721.1/163427</link>
<description>Wide Range Switched Mode RF Power Amplifiers and their applications
Pressel, Adam Jay
Switched-mode power amplifiers (SMPAs) are needed that can operate across a wide range of power levels and load impedances with fast response while maintaining high efficiency. Such designs would be valuable for many applications, including plasma generation and wireless power transfer. We introduce a new wide-range SMPA architecture that provides direct output voltage modulation, enabling it to modulate output power and compensate for resistive load variations. Dynamic frequency modulation is leveraged to address reactive load variations. The new architecture enables all the semiconductor switches to maintain zero-voltage switching across all operating conditions. Experimental results show that the wide-range half-bridge power amplifier delivered a power range of 25 W - 95 W across each individual resistive load in the range of 5 Ω - 20 Ω with up to j15 Ω reactance. The maximum dc-ac efficiency is 86% with a 20 Ω load and 110.5 W load power.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163427</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tension-Leg Platform for Offshore Diffuser-Augmented Hydrokinetic Turbine</title>
<link>https://hdl.handle.net/1721.1/163426</link>
<description>Tension-Leg Platform for Offshore Diffuser-Augmented Hydrokinetic Turbine
Mannier, Robert B.
Harnessing marine energy offers significant potential for advancing clean and sustainable power generation. This thesis focuses on the design and optimization of a diffuser-augmented hydrokinetic turbine, supported by a tension-leg platform, to harness ocean and tidal currents for renewable energy production. By incorporating diffuser technology, the turbine’s efficiency is enhanced, increasing the coefficient of power and enabling effective energy capture even in environments with lower current speeds.&#13;
The research involves 2D and 2D axisymmetric modeling of the diffuser and turbine using Actuator Disk Theory (ADT), with tools such as Rhino and Star CCM+. Mounted on a floating tension-leg platform anchored to the seabed, the turbine is designed to exceed the Betz limit, maximizing power output and advancing offshore energy harvesting capabilities.&#13;
This thesis is solely focused on the design and optimization of the hydrokinetic turbine, providing an in-depth analysis of diffuser performance. The findings contribute to the development of marine renewable energy technologies, promoting sustainable and efficient power generation from ocean and tidal currents.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163426</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimized Sustainable Hydrogen Generation from Liquid Metal Activated Aluminum-Water Reactions</title>
<link>https://hdl.handle.net/1721.1/163425</link>
<description>Optimized Sustainable Hydrogen Generation from Liquid Metal Activated Aluminum-Water Reactions
Kombargi, Aly
This study presents a sustainable and cost-effective method for hydrogen generation using aluminum waste, addressing both energy and environmental challenges. Activated aluminum reacts with water to produce hydrogen, heat, and aluminum oxyhydroxide (boehmite), a commercially valuable byproduct. As a safe, efficient, and cost-effective energy carrier with an energy density exceeding 20 kWh/L (8 kWh/kg), aluminum enables on-demand hydrogen production for diverse applications, including maritime transport and off-grid power systems. This research optimizes reaction kinetics to enhance hydrogen yield and rate while minimizing costs and carbon emissions.&#13;
&#13;
Activation involves coating aluminum with a gallium-indium eutectic (eGaIn) liquid metal, which disrupts the oxide layer and enables spontaneous reaction in aqueous environments. The study investigates seawater as an ionic medium for eGaIn eutectic agglomeration and reuse. However, chlorine binding slows the reaction, which was countered using high-temperature operation and catalytic enhancement. Adding 0.02 M imidazole accelerated the reaction 60-fold, enabled 92% eutectic recovery, and achieved 99% of the theoretical hydrogen yield.&#13;
&#13;
Environmental conditions significantly influence reaction efficiency. Increasing seawater temperature from 20°C to 90°C enhanced reaction rates 44-fold, aligning with Arrhenius Law. Isochoric reactions at high pressure were tested to simulate deep-sea vehicle environments using onboard hydrogen reactors fueled by aluminum and surrounding seawater. Results showed a 33% yield increase at 6 MPa (586 m depth) compared to atmospheric pressure, primarily due to surface tension effects that reduce hydrogen bubble size, improving aluminum-water contact at higher pressures.&#13;
&#13;
A life cycle and cost analysis identified an optimized production scenario with a carbon footprint of 1.45 kgCO2eq/kg H2, meeting green hydrogen standards. Major contributors include recycled aluminum use and processing, and the eGaIn alloy, but eutectic recovery and thermal energy reuse further reduce emissions. Using scrap aluminum and recovering byproducts, hydrogen production costs are estimated at $9.2/kg. Additionally, reselling boehmite (market price $2.5/kg) could generate revenue 5.6 times greater than input costs, significantly improving economic viability.&#13;
&#13;
To demonstrate scalability, a modular hydrogen reactor was developed and directly integrated with a commercial generator, reliably producing 400W of power from on-demand, 99% purity lab-tested hydrogen. The envisioned application is a fully integrated aluminum recycling system that utilizes aluminum waste and seawater to generate hydrogen, thermal energy, and boehmite. This approach advances clean energy technology by providing a scalable and economically viable hydrogen production pathway.&#13;
&#13;
Beyond its direct application in underwater technologies, this optimized reaction can support energy-intensive operations such as heating, desalination, transportation, industrial hydrogen production for refining and fertilizer synthesis, stationary energy systems for off-grid power, and renewable energy storage. Its versatility strengthens energy security and decarbonization efforts while offering a cost-competitive alternative to conventional fuels, positioning it as a key enabler of a sustainable energy future.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163425</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Venture Capital and Corporate Finance</title>
<link>https://hdl.handle.net/1721.1/163424</link>
<description>Essays in Venture Capital and Corporate Finance
Paine, Fiona
This thesis comprises three chapters. In the first chapter, I study the impact of restricting foreign venture capital investments for national security reasons. Countries have increasingly been using economic policies to further geopolitical and national security goals. Thus far, economists have focused on studying tariffs and subsidies despite a broader range of economic tools actually being implemented. How costly are these other policies, and what are their effects on capital markets, investment, and the economy more broadly? In this paper, I examine a 2018 U.S. law (FIRRMA), which expanded the government’s ability to review and block transactions on national security grounds to include venture capital (VC) investments by foreign investors. I use the passage of FIRRMA, its differential impact on specific VC industries, and the role of Chinese investors in U.S. venture capital to study whether foreign investment screening impacts capital supply. I find that FIRRMA had a negative effect on capital supply in impacted industries due to two factors: 1) the specialization of VC investing (such that the substitution of outside capital into impacted industries is low) and 2) networks in VC investing (there are spillovers to domestic syndication partners in impacted industries). I further find that the change in capital supply is costly, leading to lower innovation by startups. I introduce a novel way of measuring innovation early in the life of a startup using text from startup websites. I use this measure to show there is a selection effect where VCs give first-round funding to less innovative startups after FIRRMA. Finally, in a case study of the biotechnology industry, I show that impacted startups suspend drug projects at higher rates, and in particular their risky projects. In the second chapter, joint with Johnathan Jensen, we study municipal cyber risk. Cyber attacks are estimated to cost billions of dollars per year.
However, cyber risk is hard to study since companies rarely disclose hacks and don’t share information on cyber security investment. This paper takes a novel approach by looking at municipal hacking. We use a dataset of municipal ransomware attacks merged with hand-collected IT investment data and municipal bond data. We find that lower IT investment predicts hacking. Furthermore, following a ransomware attack, municipal bond yields fall by 13 basis points and IT investment as a share of total town expenditure increases by 23 basis points. We investigate potential channels leading to decreased yields post hacking. We find evidence that being hacked reduces cyber risk by disciplining municipalities to move closer to the optimal level of IT spending. The third chapter investigates the impact of firm data collection and analysis of collected data on the riskiness of firm cash flows. I use a scraped data set of the third-party resources loaded on firms’ websites as a measure of firm data collection and analysis practices. I find that firm use of less effective web analytics is associated with an increase in the variance of sales, inventory, and both fixed and variable costs. This effect is despite a lack of change in the level of these variables. Looking at the effect of treatment on the treated, there is higher profit and sales variance during times of higher uncertainty. I use differences in web analytics technology and a change in their relative effectiveness as my identification strategy. As a case study of a large negative demand shock, I look at differences in firm reactions to COVID-19 based on their web analytics usage.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163424</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coevolution of Small Business Strategy and Regulation: A Mixed-Methods Study of United States Craft Breweries</title>
<link>https://hdl.handle.net/1721.1/163423</link>
<description>Coevolution of Small Business Strategy and Regulation: A Mixed-Methods Study of United States Craft Breweries
Rixey V, Eppa
This dissertation asks: how do small firms overcome regulatory constraints despite powerful opposition? Significant research has documented the nonmarket strategies of large, multinational firms seeking to benefit from and capture regulatory systems. However, despite the historically important role of small and medium-sized enterprises (SMEs) in the economic and civic structures of the US, there is much we do not know about whether and how they attempt to exert their own influence in regulatory environments. To explore this, the US beer industry was selected as a strategic research site where SMEs have had a range of successes and failures in developing policy influence. In the late 1970s, the US beer industry rapidly consolidated to fewer than 100 breweries, but today, with the rise of small, craft breweries, there are over 9,000 breweries in the US. Over 7,000 of these focus on direct-to-consumer (DTC) sales, which were explicitly or practically illegal in all 50 states in 1980. How did this market and regulatory transformation take place, and why did some states significantly change their policies to support small brewers while others did not? Two studies were conducted to explore this: an in-depth qualitative study of a single state and a mixed-methods comparative study of six states. The single state was selected for variation in policy outcomes over time and at local levels. Through interviews and archival research, it was revealed that craft breweries engaged in a bottom-up approach, through which individual firms shift venues downward, from state to local regulators, to successfully ease state-level constraints. In local public hearings, individual entrepreneurs blended local corporate social responsibility (CSR) with an experimental approach to corporate political activity (CPA) that motivated city-based regulators to challenge state-level restrictions on DTC business models.
To understand how this process of developing policy influence unfolds in the absence of local regulators, the national trade associations in the beer industry were analyzed and six states where the state has near exclusive control over alcohol regulations were selected for further analysis. Controlling for a range of factors through a cross-sectional database led to a geographically proximate sample of six comparable states with wide variation in the favorability of policies and the number of breweries per capita. A unique dataset of over 5,000 legislative updates on proposed and enacted federal and state policy changes was supplemented with archival and interview data to assess policy influence. The conventional approach described in the literature, collective action via a trade association, was important but often insufficient. Each state had a functioning trade association representing most craft breweries, but sustained policy influence was observed only in states where full-time leaders of these associations understood the political landscape and developed policy partnerships to tilt the odds in their favor. Policy partnerships entailed legislation alleviating regulatory constraints while also including new provisions that ensured long-term alignment among the partners. Taken together, these studies reveal the vital importance of collective action extending beyond the focal industry for SMEs to develop policy influence at the local or state level.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163423</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Application of an Output-based Adaptive, Higher-order Finite Element Method to Sonic Boom Propagation</title>
<link>https://hdl.handle.net/1721.1/163422</link>
<description>On the Application of an Output-based Adaptive, Higher-order Finite Element Method to Sonic Boom Propagation
Trono Figueras, Renato
The reduction of sonic boom loudness to within acceptable limits is a crucial factor for the viability of supersonic aircraft. This thesis presents a computational framework for simulating sonic boom propagation using an output-based adaptive, higher-order finite element method. The research employs the Variational Multiscale with Discontinuous Subscales (VMSD) method, integrating Continuous Galerkin (CG) and Discontinuous Galerkin (DG) features, referred to as VMSD-BR2. This approach leverages static condensation to manage computational cost while utilizing DG stabilization techniques for enhanced stability and adjoint consistency. A key component of this work is the application of the dual weighted residual (DWR) method for output error estimation, which in turn drives the mesh optimization process. The method’s efficacy is validated using smooth solutions for the viscous Burgers equation and the adjoint PDE for a volume output functional. Additionally, artificial viscosity is incorporated via a shock sensor PDE approach to handle shock presence, with necessary corrections applied to the DWR error estimate. The VMSD-BR2 method is then applied to a real-world scenario solving the augmented Burgers equation, which models the propagation of sonic booms. The results include the pressure perturbation field, adapted meshes, ground-level B-SEL filtered pressure, and perceived loudness at ground level, demonstrating the method’s practical application.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163422</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>C. elegans as a Platform for Multimodal Neural Data Integration</title>
<link>https://hdl.handle.net/1721.1/163421</link>
<description>C. elegans as a Platform for Multimodal Neural Data Integration
Simeon, Quilee
Systems neuroscience has traditionally been fragmented into investigations at discrete levels of organization, creating methodological and conceptual gaps that hinder unified understanding of neural function. This thesis examines the nematode Caenorhabditis elegans as a platform for integrating diverse neural data modalities, offering a pathway to bridge these gaps. The hermaphrodite C. elegans, with its completely mapped connectome, optical transparency, genetic tractability, and stereotyped nervous system of only 302 neurons, presents an opportunity for comprehensive measurements across multiple dimensions of neural function. The review is organized around three fundamental neural data modalities accessible in C. elegans: (1) molecular genetic profiles, (2) network connectivity, and (3) neural activity dynamics. Historically studied in isolation, these complementary data types are increasingly being bridged through technological and computational innovations. We examine experimental advances enabling whole-nervous-system measurements of these modalities, as well as data standardization efforts and computational frameworks for cross-modal integration. While understanding the relationship between neural activity and behavior remains a fundamental goal of systems neuroscience, this thesis focuses on neural data acquisition and integration rather than behavioral analysis, which has been extensively covered elsewhere. We conclude with some original proposals to overcome current limitations in multimodal data acquisition and synthesis, and suggest future directions toward a holistic understanding of how molecular components, network connectivity, and cellular physiology collectively give rise to neural function in C. elegans. These integrative approaches establish a roadmap that may eventually scale to more complex nervous systems and advance our understanding of neural computation across species.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163421</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Nickel Short: Rethinking Element Scarcity in Pursuit of a Fusion-Powered World</title>
<link>https://hdl.handle.net/1721.1/163420</link>
<description>A Nickel Short: Rethinking Element Scarcity in Pursuit of a Fusion-Powered World
Sutcliffe, Douglas
Fusion energy presents a promising solution for current global decarbonization goals. This thesis presents an adaptable model for evaluating mineral sufficiency in the global deployment of fusion power. Using the ARC Magnetic Confinement (MC) Deuterium-Tritium (D-T) fusion concept as a framework, this research integrates mineral usage estimates from the International Energy Agency (IEA) with MIT Energy Initiative’s (MITEI) energy production forecasts by generation technology. Using MITEI’s $2,800/kW cost scenario for fusion power generation, the model situates the demand for fusion-critical minerals within the broader context of growing mineral needs driven by the clean energy transition, and offers specific, quantitative insights into mineral sufficiency risks. The study finds that beryllium will face significant shortages solely due to fusion demand, with resource exhaustion projected to occur within 40 years. When accounting for additional demands from Electric Vehicles (EVs), battery storage, and transmission infrastructure, chromium and nickel are projected to exhaust economically extractable reserves within 21 to 35 years at current prices. The research further reveals that for nine of the thirty elements evaluated, over 50% of production is concentrated in a single country, and for half of the minerals China is the largest producer, introducing geopolitical risks. Notably, at just 13 kg per reactor, the demand for Rare Earth Elements (REEs) is not exposed to significant risk, even without the top-producing country. The research also identifies current reactor designs and strategies that could help mitigate each identified risk.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163420</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>As global ambient temperatures continue to rise, with the highest recorded annual averages since 1850 being within the last ten years, problems emerge for species exhibiting temperature-dependent sex determination. This is the process by which the sex of an animal’s embryo is determined based on the temperature of the environment in which it is incubated, which can result in skewed sex ratios within a population, as in the case of the critically endangered Hawksbill Sea Turtles (Eretmochelys imbricata). Reportedly, 85-95% of Hawksbills sampled in the wild are currently female [3]. This sex imbalance can negatively impact the species’ ability to procreate, leading to the potential for extinction. Currently, no viable, long-term solutions exist to effectively and safely cool sea turtle eggs while still keeping them within their natural habitat. This research proposes the creation of sea turtle egg incubators designed to achieve a temperature range that will produce a higher percentage of male hatchlings to help rectify this imbalance in habitats heavily affected by climate change. These incubators are designed to be affordable, easy to build and, most importantly, safe for the sea turtle eggs. Three-month-long temperature trials for the incubator were conducted in Jamaica with conservationist community partners at Oracabessa Bay Sea Turtle Project. Results showed that this incubator is not only easy to manufacture and use, but that it successfully regulates the temperature range in favor of more male hatchlings, while also increasing the emergence rate of the hatchlings from 70% in natural nests to over 80%. During one of the hottest months in Jamaica, the incubator, deployed without water changes, doubled the predicted percentage of males produced by natural nests. When provided with cool water changes the incubator quintupled this value. Throughout the months of August to October, the incubator achieved a temperature range that is predicted to produce 85-99% male hatchlings, thus counteracting the feminization phenomenon occurring in nature.</title>
<link>https://hdl.handle.net/1721.1/163419</link>
<description>As global ambient temperatures continue to rise, with the highest recorded annual averages since 1850 being within the last ten years, problems emerge for species exhibiting temperature-dependent sex determination. This is the process by which the sex of an animal’s embryo is determined based on the temperature of the environment in which it is incubated, which can result in skewed sex ratios within a population, as in the case of the critically endangered Hawksbill Sea Turtles (Eretmochelys imbricata). Reportedly, 85-95% of Hawksbills sampled in the wild are currently female [3]. This sex imbalance can negatively impact the species’ ability to procreate, leading to the potential for extinction. Currently, no viable, long-term solutions exist to effectively and safely cool sea turtle eggs while still keeping them within their natural habitat. This research proposes the creation of sea turtle egg incubators designed to achieve a temperature range that will produce a higher percentage of male hatchlings to help rectify this imbalance in habitats heavily affected by climate change. These incubators are designed to be affordable, easy to build and, most importantly, safe for the sea turtle eggs. Three-month-long temperature trials for the incubator were conducted in Jamaica with conservationist community partners at Oracabessa Bay Sea Turtle Project. Results showed that this incubator is not only easy to manufacture and use, but that it successfully regulates the temperature range in favor of more male hatchlings, while also increasing the emergence rate of the hatchlings from 70% in natural nests to over 80%. During one of the hottest months in Jamaica, the incubator, deployed without water changes, doubled the predicted percentage of males produced by natural nests. When provided with cool water changes the incubator quintupled this value. Throughout the months of August to October, the incubator achieved a temperature range that is predicted to produce 85-99% male hatchlings, thus counteracting the feminization phenomenon occurring in nature.
Espinal, Michael A.
Foams, widely used in packaging, insulation, protective gear, and medical implants, are versatile materials but mechanically inefficient due to their bending-dominated microstructure, leading to an exponential loss of stiffness and strength at low relative densities. Architected materials address this limitation through engineered microstructures that achieve near-linear scaling of properties with relative density. However, truss- and plate-based designs suffer from stress concentrations, while shell-based architectures, though more mechanically efficient, remain highly sensitive to defects and are challenging to fabricate at scale via additive manufacturing. Spinodal architected materials, derived from scalable spinodal decomposition processes, offer a promising alternative with aperiodic, double-curvature microstructures that enhance mechanical efficiency at low relative densities. Nevertheless, their behavior beyond the elastic regime remains largely unexplored. This thesis investigates the nonlinear mechanics of spinodal architected materials by combining a comprehensive experimental dataset with computational modeling. A total of 107 unique morphologies were fabricated and subjected to uniaxial compression along three principal directions, resulting in a dataset of 321 stress-strain curves. Morphologies were generated via simulated spinodal decomposition, allowing controlled variation of anisotropy. Explicit finite element simulations, validated against experimental data, revealed that plastic energy dissipation dominates the large-strain mechanical response. To quantitatively link local morphology to global mechanical behavior, we introduce the Normal Participation Factor (NPF) — a scalar geometric parameter that captures the alignment between surface normals and the loading direction. We demonstrate that the NPF is a material-agnostic proxy for equivalent plastic strain and is linearly correlated with the total energy dissipated during deformation. 
Combining insights from both experiments and simulations, we establish the NPF as a first-order predictive tool for mechanical behavior under large strains, enabling structure-property predictions without reliance on costly simulations or extensive experimental testing. Altogether, this work lays the foundation for developing finite-strain structure-property relationships in spinodal architected materials, advancing their potential for real-world applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163419</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing Tendon-Driven Robotic Systems: From Climbing Robots to String Actuators</title>
<link>https://hdl.handle.net/1721.1/163418</link>
<description>Advancing Tendon-Driven Robotic Systems: From Climbing Robots to String Actuators
Poon, Ryan Joseph Mar
Tendon-driven mechanisms provide a range of benefits for robotic systems, particularly by allowing actuators to be mounted at the base of a manipulator and reducing its inertia. This thesis explores two projects that exploit and advance tendon-driven mechanisms: a wheeled-grasping hybrid climbing robot with modular tendon-driven grasping arms and a hybrid twisted-winching string actuator. Called CLIMR (Cabled Limb Interlocking Modular Robot), the novel climbing robot adapts to columns of varying diameters by adding or removing modular arm links. CLIMR also features capabilities like self-locking (the ability of the robot to stay on the column without power), autonomous grasping, and rotation around the column axis. Mathematical models describe conditions for self-locking, vertical wheeled climbing, and complete grasping of a column. Simulations and experimental results validate the proposed models. The insights from CLIMR are then extended into general design strategies for future developments of similar hybrid climbing robots, focusing on methods to inform design decisions and assess metrics such as adaptability. Ultimately, this work provides a comprehensive framework for designing hybrid climbing robots, highlighting the potential of autonomous solutions for environments where climbing tall structures is critical. Stemming from this climbing robot work is a novel actuator system combining a twisted string actuator (TSA) with a winch mechanism. Relative to traditional hydraulic and pneumatic systems, TSAs are compact but face limitations in stroke length and velocity. This TSA-winch system overcomes these constraints without risking overtwisting by providing both high displacement winching and high force twisting modes. The design features a rotating turret that houses a winch and a worm gear transmission driven by a through-hole drive shaft. Models are developed for the combined displacement and velocity control of this system. 
Experiments validate the open loop model as well as the closed loop model, which uses a conductive string feedback controller with a gain scheduling and control effort allocation scheme. For specific cases that require large displacement winching followed by high force twisting over several repeatable cycles, an alternate design sacrifices complete string state control and replaces a motor with passive automatic clutches to achieve a seamless transition between modes triggered by the string load. The models of the clutch torque thresholds for this version of the actuator are verified by experiments. Overall, this research contributes to the development of more versatile and efficient actuation systems for tendon-driven robotic applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163418</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coordination of distributed energy resources for a reliable, resilient, and affordable decarbonized grid</title>
<link>https://hdl.handle.net/1721.1/163417</link>
<description>Coordination of distributed energy resources for a reliable, resilient, and affordable decarbonized grid
Jagadeesan Nair, Vineet
Rapid decarbonization of the power grid is essential to meet climate goals by reducing emissions and enabling sustainable electrification of sectors like transport and heating. This requires shifting from centralized fossil-fuel generation to variable renewables like wind and solar. The grid must also adapt to a growing number of small-scale, distributed energy resources (DERs) at the edge, such as rooftop solar, batteries, electric vehicles, and heat pumps. This thesis focuses on modeling, optimizing, and coordinating DERs to enable a flexible, resilient, and affordable grid. First, it proposes a novel hierarchical local electricity market for low and medium-voltage distribution grids. This structure enables DER participation through decentralized and distributed optimization, respecting grid physics while preserving privacy and scalability. The market is applicable to both balanced and unbalanced radial grids using two different convex relaxations and power flow models. Grid services are also priced based on duality theory. Numerical simulations show improved dispatch efficiency, reliability, voltage regulation, and lower retail electricity rates. Second, the thesis applies game theory and mechanism design to extract flexibility from autonomous, strategic DER owners. A repeated Stackelberg game with incomplete information and intertemporal constraints yields equilibrium pricing with closed-form solutions. Third, a distributed decision-making framework is developed to coordinate DERs for grid resilience. It mitigates cyber-physical attacks and outages, ranging from 5 to 40% of peak load, using local flexibility and grid reconfiguration, extensively validated through both software and hardware-in-the-loop simulations. Finally, the thesis addresses DER hosting capacity. 
New algorithms are developed that co-optimize the siting and sizing of diverse DERs under uncertainty using Monte Carlo sampling, stochastic programming, and k-means clustering for scenario reduction. Results show that intelligent DER coordination can defer grid infrastructure upgrades and support greater renewable integration and electrified demand growth. Together, these contributions provide analytical and simulation tools to improve the planning and real-time operation of future distributed, low-carbon power grids.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163417</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Development and Utilization of Tandem Fluency in Human-Exoskeleton Interaction</title>
<link>https://hdl.handle.net/1721.1/163416</link>
<description>The Development and Utilization of Tandem Fluency in Human-Exoskeleton Interaction
Koo, Bon H. (Brandon)
There is strong demand for portable technologies that enhance human power output while maintaining safety and range, not only in defense and industry but also in aerospace. Exoskeletons and other wearable powered devices have been proposed as solutions, but a major barrier to adoption is the issue of “fluency”: a combination of metrics representing the seamlessness of human-robot interaction. Most current exoskeleton systems, especially for non-cyclic motions, disrupt user intent and movement, often offering no benefit, or even causing harm by increasing discomfort and injury risk. This lack of fluency is frequently linked to poor intent recognition and absence of predictive control. To address this, we propose developing a human motion prediction system and studying its impact on fluency in exoskeleton-like devices and related human-centered technologies in real-world applications. We introduce an expanded metric “tandem fluency” based on conventional fluency, tailored for evaluating human-robot interaction (HRI) systems where human and robot agents are kinematically synchronized to perform functional tasks. We then develop a proof-of-concept and a functional deep neural network (DNN) capable of detecting human motion intent and predicting motion trajectories in advance using biosignals such as surface electromyography (sEMG). In parallel, we build and test prototype exoskeleton hardware with both single and multiple degrees of freedom. Finally, we conduct human trials with the full closed-loop tandem human-exoskeleton system to evaluate the impact of motion prediction-based control on tandem fluency. 
The results show that classification and regression prediction of human motion prior to the initiation of physical motion is possible, with performance sufficient for practical application of this information; that the prediction can be generated not only before physical motion begins, but often even before the full electrical activation of the primary agonist in many motions; that the DNN is robust to variations in sensor hardware and input formatting; and, furthermore, that the use of this prediction in the controls of a tandem robot system has the potential to improve tandem fluency by positively affecting both subjective experience and objective/metabolic results.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163416</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping Informality: An Approach to Classifying Sidewalk Informal&#13;
Practices and Elements Through Street View Imagery</title>
<link>https://hdl.handle.net/1721.1/163415</link>
<description>Mapping Informality: An Approach to Classifying Sidewalk Informal&#13;
Practices and Elements Through Street View Imagery
Co, Dominic Lim
By 2050, the United Nations estimates that 68 percent of the world’s population will live in cities, with 90 percent of that growth concentrated in rapidly urbanizing informal communities across Africa, Latin America, and Asia. In these contexts, informality, defined as unregulated commerce, adaptive reuse of space, incremental construction, and self-organized infrastructure, shapes the everyday choreography Jane Jacobs called the “sidewalk ballet.” Yet because governments rarely collect census-grade data on such activity, informality remains poorly documented and weakly understood. This thesis introduces a transferable computational framework to formalize informality by transforming street imagery into an auditable taxonomy of informal street-level elements, activities, and practices. The framework is tested in two contrasting districts, District 1 and District 5 of Ho Chi Minh City, where sidewalks are highly contested by vendors, pedestrians, and regulators. The contribution of this thesis is two-fold. First, this thesis contributes a three-stage pipeline for classifying sidewalk informality. Using Seesaw (Moll et al., 2022), a CLIP-based feedback loop retrieves and soft-labels candidate scenes. This is followed by manual verification and fine-tuning a lightweight ResNet on binary categories (e.g., stationary vs. mobile vendors). Compared to the zero-shot model Qwen-VL-Max, the fine-tuned ResNet delivered more balanced performance (precision/recall: 0.62–0.78) and better handled nuanced, context-sensitive distinctions. In contrast, Qwen-VL-Max favored recall and object salience but struggled with subtle or spatial cues like mobile vs. stationary setups. Second, this thesis develops a taxonomy and annotated dataset of informality, which is used to reveal spatial inequities in sidewalk use. 
By converting curbside complexity into structured, updateable categories, the framework enables planners to recognize the adaptive value of informal practices, target genuine hazards, and design interventions for more equitable urban planning.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163415</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nuclear Microreactor-Powered Container Ships for Maritime Decarbonization</title>
<link>https://hdl.handle.net/1721.1/163414</link>
<description>Nuclear Microreactor-Powered Container Ships for Maritime Decarbonization
Dickerman, Matthew F.
The maritime shipping industry, responsible for approximately 3% of global greenhouse gas emissions, faces growing pressure to achieve net-zero emissions by 2050 under the International Maritime Organization (IMO) framework. Alternative fuels such as liquefied natural gas, ammonia, and methanol present challenges related to energy density, infrastructure, safety, and cost. Nuclear microreactors offer high energy density, zero operational emissions, and multi-year endurance, but require coordinated regulatory development and stakeholder engagement for commercial adoption.&#13;
&#13;
This thesis evaluates the feasibility of integrating microreactors into container ship designs employing electric propulsion and standardized intermodal logistics. Holos-Quad microreactors are selected based on their modular architecture, transportability, and compatibility with marine operations. Detailed ship concepts are developed for Feeder, Panamax, and New-Panamax classes, accompanied by a phased fleet development strategy.&#13;
&#13;
Economic modeling compares the lifecycle costs of conventional and microreactor-powered ships, incorporating capital expenditures, operating costs, financing assumptions, and carbon pricing. Fleet-level analysis indicates that microreactor-powered ships can achieve comparable or improved profitability while eliminating nearly 44 million metric tons of CO2e emissions across a ten-ship fleet. Sensitivity analyses confirm the robustness of these results across a wide range of future scenarios.&#13;
&#13;
By integrating stakeholder analysis, technical feasibility assessments, and economic modeling, this research establishes a commercially viable framework for zero-emission nuclear-powered shipping, offering a scalable pathway toward sustainable maritime operations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163414</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A magnetic levitation testbed for development of real-time control frameworks applied in fusion</title>
<link>https://hdl.handle.net/1721.1/163413</link>
<description>A magnetic levitation testbed for development of real-time control frameworks applied in fusion
Lee, Yehoon
This thesis presents the development of a magnetic levitation device as a hardware-in-the-loop platform to be used for research in Control and Data Acquisition frameworks applied to fusion experiments. Specifically, the testbed aims to demonstrate distributed, modular control using a plasma control system framework being developed at the Plasma Science and Fusion Center at MIT. This framework integrates a real-time control framework, MARTe2, and a data management framework, MDSplus, to provide platform flexibility and robust data management for rapid prototyping of control systems. Both frameworks are widely used individually in fusion experiments worldwide. The magnetic levitation setup is centered around a single electromagnet coil which levitates a permanent disk magnet from above. Implemented with the integrated MARTe2/MDSplus framework, the controller, actuator, and sensors are distributed over the network. With the magnetic levitation testbed, this thesis achieves three objectives: 1. formulation of a physics-based model of the system, 2. development of a controller in a modular, networked framework, and 3. training and implementation of learning-based methods within the framework. First, a state-space model for single-axis magnetic levitation is formulated based on theory and refined with magnetic field measurements. A feedback controller is then developed and implemented with MATLAB Simulink. Afterwards, a vision-based observer is developed to estimate position and tilt of the levitated magnet. Pose-image datasets are auto-labeled using fiducial markers and are used to train a convolutional neural network. Finally, the trained network will be applied in system identification of the final controlled system. 
Through the process of system development, this thesis proposes that the integrated MARTe2/MDSplus framework is robust in performing real-time control of a networked system, and its structural modularity is advantageous for developing and testing learning-based models.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163413</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Fast Assay of Bacteria Cell Permeability for Genetic&#13;
Transformation</title>
<link>https://hdl.handle.net/1721.1/163412</link>
<description>A Fast Assay of Bacteria Cell Permeability for Genetic&#13;
Transformation
Nieves, Charmaine
Bacterial cell genetic engineering is fundamental for research aiming to learn more about bacterial species for a broad range of applications. One method of intracellular delivery of foreign DNA during the genetic engineering process is the use of electroporation to create pores along the bacterial cell membrane. Current methods for assessing pore formation do not directly measure cell permeabilization or enable same-day assessment. In this thesis, a novel fast-screening protocol combining SYTOX green, microfluidics, and fluorescence imaging is evaluated for its capability to assess multiple conditions for cell permeabilization within a single day. By imaging bulk suspensions of post-electroporated cells stained with intracellularly delivered SYTOX, multiple electroporation conditions can be rapidly screened for cell permeabilization. This fast-screening protocol utilizes standard microbiology equipment and low-cost microfluidic imaging chambers, lowering the barrier to adoption and significantly reducing experimental time compared to conventional protocols involving foreign DNA delivery. Importantly, by decoupling permeabilization assessment from foreign DNA uptake, this method isolates the effect of membrane permeabilization from confounding factors such as restriction-modification systems. As a result, it provides a more accurate qualitative and quantitative assessment of bacterial membrane disruption. This approach enables same-day evaluation of electroporation conditions regardless of bacterial growth rate, potentially accelerating the optimization process for intracellular delivery in gene editing applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163412</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Human Arm Motion Prediction via Structured Multitask Variational Gaussian Processes</title>
<link>https://hdl.handle.net/1721.1/163411</link>
<description>Probabilistic Human Arm Motion Prediction via Structured Multitask Variational Gaussian Processes
Chong, Jinger S.
Accurate human motion prediction with uncertainty estimation is essential for safe and efficient human-robot collaboration, where robots must anticipate and react to human movements in real-time. Existing methods either rely on sophisticated techniques that demand extensive training data and sacrifice interpretability, or use simpler approaches like conventional Gaussian Processes (GPs) that fall short in performance. To address this gap, we propose a novel structured multitask variational GP framework that explicitly incorporates joint dependencies to reflect human kinematics. We further enhance this framework by integrating angular velocity constraints, which improve the physical plausibility of predictions. The addition of constraints alone yields up to a 66% reduction in mean angle error (MAE) and an 84% improvement in the likelihood of predicting ground truth (NLL), outperforming standard GP baselines across a wide range of motion types and prediction horizons. Among model variants, our structured GP with constraints offers the best tradeoff—achieving MAE within 1.1–2.6% and NLL within 0.001–0.012 of the best-performing model, while maintaining significantly lower overconfidence rates (OCR), particularly at short horizons where the independent GP model OCR reaches nearly 45%. These results underscore the importance of incorporating structure and context in human motion prediction, demonstrating that even simpler probabilistic models like GPs can achieve substantial performance gains when augmented with such information.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163411</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Permanent Magnet Synchronous Motors: Nonlinear Dynamic Modeling, Hardware Characterization, and High-Bandwidth Torque Control for Applications in Dynamic Robotics</title>
<link>https://hdl.handle.net/1721.1/163410</link>
<description>Permanent Magnet Synchronous Motors: Nonlinear Dynamic Modeling, Hardware Characterization, and High-Bandwidth Torque Control for Applications in Dynamic Robotics
Roy, Ronak
The high-level control algorithms that are responsible for achieving dynamic locomotion in legged robots depend on accurate torque production for matching real-life performance with simulated performance. To achieve accurate torque production, actuators must run high-bandwidth, low-level torque control. Developing high-performance low-level controllers requires accurate actuator models. This thesis covers the physical model of the Permanent Magnet Synchronous Motor (PMSM), a very common type of actuator in dynamic robotics. This thesis details the derivation of the PMSM linear model, how to adapt the model dependent on the physical construction of a real motor, and the implementation of Field-Oriented Control (FOC) to achieve torque control. This thesis also describes a novel design of a high-precision dynamometer, which allows a motor to be coupled with an impedance and a torque sensor in order to accurately characterize the torque production characteristics of the motor. Using this dynamometer and other experimental setups, this thesis validates the model and determines parameters for multiple different actuators. Finally, this thesis proposes an augmented PMSM model that considers the nonlinear saturation behavior of the motor, validating the principle with hardware experiments, and demonstrates a nonlinear torque model and gain-scheduled current controller that improve torque tracking performance.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163410</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of an Apparatus and Testing Strategy for Characterizing Rolling Resistance of Omnidirectional Wheels</title>
<link>https://hdl.handle.net/1721.1/163409</link>
<description>Development of an Apparatus and Testing Strategy for Characterizing Rolling Resistance of Omnidirectional Wheels
Ilkbahar, Kayra B.
Omnidirectional wheels (omni wheels) are a type of wheel technology similar to caster wheels but capable of simultaneous longitudinal and lateral motion, making them suitable for holonomic motion applications. In recent years, their popularity has grown substantially in areas such as educational robotics, autonomous vehicles, and industrial automation. Despite their similarity to caster wheels in both function and application, omni wheels are a much less mature technology and few agreed-upon standards exist for their design and testing. This thesis covers the design of a test procedure and its requisite test apparatus to characterize the rolling resistance of omni wheels across various test conditions, and focuses specifically on the mechanical and electrical design of an apparatus which can measure the rolling resistance coefficient of omni wheels while modulating their load weight, travel angle, and travel speed.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163409</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Structural Approach to Measuring Time-varying Risk&#13;
Aversion</title>
<link>https://hdl.handle.net/1721.1/163345</link>
<description>A Structural Approach to Measuring Time-varying Risk&#13;
Aversion
von Turkovich, Nick
Non-homothetic preferences have the potential to rationalize important asset pricing facts including time-varying risk premia and business cycle movements in asset prices (e.g., Campbell and Cochrane (1999)). This paper offers a structural approach to measuring time-varying risk aversion. Motivated by the literature on consumption commitments (e.g., Flavin and Nakagawa (2008), Chetty and Szeidl (2016), Chetty, Sandor, and Szeidl (2017)), I develop a model in which investors have nonseparable preferences over housing and non-housing consumption, and investors must consume a minimum amount of housing each period. Non-housing consumption is assumed to be flexibly chosen. The key insight is that the intratemporal optimality condition between the two goods reveals information about the surplus consumption ratio, a key variable driving risk aversion. A cointegrating relationship between relative quantities and prices allows us to identify the elasticity of intratemporal substitution and measure surplus housing consumption. Using aggregate U.S. consumption data from 1959 to the present, the measured surplus consumption ratio demonstrates clear business cycle fluctuations, rising during expansions and falling during recessions. Consistent with the theory, this measure also predicts future excess returns.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163345</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonization of Gas Heating in Massachusetts: An Evaluation of Current Trends and Opportunities</title>
<link>https://hdl.handle.net/1721.1/163344</link>
<description>Decarbonization of Gas Heating in Massachusetts: An Evaluation of Current Trends and Opportunities
Epstein, Andrew
The commonwealth of Massachusetts has ambitious decarbonization goals enshrined in law and has been establishing the regulations to achieve them. Through its Department of Public Utilities regulatory rulings, the state has required local gas and electric utilities to pursue decarbonization not only by reducing the emissions of their electric supply but also by actively supporting gas load reduction. The residential heating sector dominates this effort, with programs like MassSave incentivizing customer adoption and now MA DPU 20-80-B&#13;
requiring gas utilities to demonstrate that they have sufficiently evaluated the possibility of non-pipeline alternatives, including but not limited to electrifying customers instead of reinvesting in the gas system for all future gas investments.&#13;
&#13;
This paper looks at a single Massachusetts utility, National Grid, and evaluates where its customers are switching to electric heat and which mechanisms are driving current adoption. It further evaluates where geographically National Grid could invest in electrification instead of replacing gas investments under the new 20-80-B order. In doing so it establishes a model for cost benefit calculations related to prospective NPA projects. This paper then examines the degree to which ongoing electrification efforts are aligned with one another. Finally, this paper explores concerns that the process of electrification might be regressive, leaving behind those who cannot afford to electrify their systems and leaving them to pay ever-increasing prices as the full gas system is paid for through rates from a decreasing population of consumers. In evaluation of such concerns, it determines the geographic correlation between ongoing decarbonization efforts and communities already facing housing burden.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163344</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Streamlining Diagnostics of Electrical-Connection-Related Errors in General Assembly Using Augmented Reality Wearables</title>
<link>https://hdl.handle.net/1721.1/163343</link>
<description>Streamlining Diagnostics of Electrical-Connection-Related Errors in General Assembly Using Augmented Reality Wearables
Salata, Elizabeth
Electrical connection errors arise frequently during manufacturing. It is optimal to repair these errors during General Assembly Trim Line stations when the wiring harnesses are still exposed and easily accessible. However, the time required to locate the cause of the errors often exceeds Trim station cycle times, so most repairs are delayed until after General Assembly. Due to the implications of shutting down the line, this results in significantly higher repair times, scrap costs, and resource use. To overcome these challenges, there is clear evidence supporting the use of Augmented Reality (AR) tools to innovate and streamline manufacturing processes. This master's thesis identified deficiencies in the current standard operating procedure for addressing errors and used a human-centered design approach to develop a novel error diagnostic process using an AR overlay technique to pinpoint where on the vehicle the problem lies. This thesis also conducted an experiment to assess the performance, success rate, and perceived cognitive load of the two processes. The data collected from the experiment provided sufficient evidence that the diagnostic process developed for this thesis reduces the elapsed time to locate the connection error by 75% with a statistically significant reduction in overall perceived cognitive load. The likelihood of widespread adoption of the AR overlay process was assessed from an estimate of further AR hardware development, safety considerations in automotive manufacturing environments, and the level of enthusiasm of all stakeholders who were consulted for this research project.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163343</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Techno-Economic Assessment of Hybrid Renewable Energy and Battery Storage Systems for Data Centers</title>
<link>https://hdl.handle.net/1721.1/163342</link>
<description>A Techno-Economic Assessment of Hybrid Renewable Energy and Battery Storage Systems for Data Centers
Sirgo, Alex
As the demand for data centers continues to grow, so does their energy consumption, making it increasingly important to develop sustainable and cost-effective strategies for powering them with carbon-free electricity. This thesis explores a techno-economic modeling framework that evaluates combinations of solar, wind, and battery energy storage systems to assess their ability to meet a data center’s electricity demand with on-site renewable generation. The model fills a gap in current literature by focusing on real-time energy matching using co-located infrastructure, rather than traditional off-site procurement methods like power purchase agreements and renewable energy credits.&#13;
&#13;
Using real-world weather and price data, the simulation calculates hourly generation, storage behavior, and grid interactions across a 20-year period. A financial model then calculates the levelized cost of energy (LCOE) for each system configuration. Results show that wind energy generally provides the lowest-cost renewable supply option, while hybrid solar and wind configurations improve renewable penetration. Battery storage plays a key role in shifting excess generation to periods of undersupply, but its economic viability depends on system sizing. Across different system configurations, renewable penetration ranged from 31.3% to 97.8%, while LCOE varied from $27.5/MWh to over $100/MWh, illustrating the trade-offs between cost and grid independence.&#13;
&#13;
By providing a structured analysis of the trade-offs between renewable penetration and cost, this research offers insight into how data centers and other energy-intensive facilities can design dedicated carbon-free energy systems. The findings underscore the importance of balancing resource diversity and storage investment to achieve decarbonization goals while maintaining economic viability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163342</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagnostics in Additive Manufacturing Using Image-Based Machine Learning</title>
<link>https://hdl.handle.net/1721.1/163341</link>
<description>Diagnostics in Additive Manufacturing Using Image-Based Machine Learning
Varma, Arun Alejandro
Additive Manufacturing (AM) is a vital capability in the aerospace industry. Blue Origin manufactures a substantial share of engine parts via metal AM. To meet growing customer demand, the company must dramatically increase engine throughput and, thus, 3D prints. Blue Origin has identified non-destructive testing (NDT) – particularly, Computed Tomography (CT) scanning – as an unsustainable bottleneck to expanding AM capacity. Not only is this process expensive, but, critically, there are not enough aerospace-grade CT machines in the world to support projected throughput. Without process change, meeting customer demand will soon become impossible. Yet, these scans provide important quality control, and any reduction in NDT must be accompanied by assurances of engine part integrity. This thesis introduces a diagnostic system that safely alleviates the bottleneck, and further yields insights that end-stage NDT alone cannot provide. The proposal is a machine learning system that evaluates the manufacturing process itself, examining layer-by-layer photographs captured during printing. It is predicated on two hypotheses: (1) These images, considered together, provide a synthetic 3D illustration of the build process; and (2) Machines can be taught to assess these process signatures dependably. The resulting system provides rich diagnostics. It achieves near-perfect anomaly recognition – 100% when using conservative defect thresholds. Operationally, the system can (at minimum) safely enable a 37-54% reduction in NDT, translating to millions of dollars in annual cost savings. In practice, this reduction will likely be higher. The system further enables early process intervention and a more data-driven approach to manufacturing intelligence. This work turns what began as an unsustainable bottleneck into an opportunity for enhanced quality control, process intelligence, and long-term manufacturing resilience.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163341</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Substitution among Social Media Platforms: Evidence from App Tracking Panel Data</title>
<link>https://hdl.handle.net/1721.1/163340</link>
<description>Substitution among Social Media Platforms: Evidence from App Tracking Panel Data
Lagutina, Rina
This thesis explores a novel approach to competitive intelligence in the social media ecosystem by leveraging external mobile panel data to study substitution dynamics. It focuses on context-specific behavioral patterns to identify which platforms compete for user attention in given situations. Using mobile app session data from April 2023 for approximately 5,000 users, the analysis segments usage into three behavioral contexts – morning, evening, and at-home sessions – and characterizes user-app interactions through descriptive statistics. K-means clustering is applied to identify archetypes of usage behavior across these contexts, revealing distinct patterns such as quick-check habits, deep content consumption, and intensive texting. By comparing app usage profiles across contexts, the study uncovers shifts in how and when platforms are used, highlighting subtle substitution dynamics. To validate the findings, the study analyzes app usage during service outages, testing whether potential substitutes see increased engagement when a competing platform is unavailable. These insights offer a richer, context-aware framework for product managers to uncover indirect competition and tailor platform strategies to specific user behaviors. Limitations include reliance on behavioral data without content-level detail, mobile-only focus, and demographic skew in the panel.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163340</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Harnessing Generative AI in Developing Economies: A Systems Framework for Policy Design in Bangladesh</title>
<link>https://hdl.handle.net/1721.1/163339</link>
<description>Harnessing Generative AI in Developing Economies: A Systems Framework for Policy Design in Bangladesh
Bari, Md Mustabeen Ul
This thesis develops a systems-based policy framework for Generative Artificial Intelligence (GenAI) implementation in developing economies, with specific application to Bangladesh. While GenAI's potential productivity and labor market impacts are well-studied in developed economies, limited research addresses the challenges faced by developing countries positioned primarily as technology consumers rather than producers. The research employs causal loop diagramming to map interactions between five critical policy domains: human capital development, digital infrastructure, data sovereignty, sectoral stimulus, and governance.&#13;
&#13;
The resulting framework identifies four primary reinforcing mechanisms that can accelerate adoption and three balancing mechanisms related to labor displacement. To validate the framework, the research analyzes contrasting implementation approaches from India and Egypt, demonstrating the importance of cross-domain synergies in effective policy design.&#13;
&#13;
Applied to Bangladesh, the framework yields a dual-entry strategy focusing on healthcare and education sectors as initial implementation domains, leveraging the country's strategic advantages while addressing resource constraints through a consortia-based implementation model that creates institutional resilience. The thesis contributes both a reusable conceptual toolkit for analyzing GenAI policy in resource-constrained settings and an initial context-anchored roadmap for Bangladesh. Future research should refine the framework through longitudinal case studies while developing more detailed, stakeholder-engaged implementation plans for Bangladesh that include concrete budget allocations, institutional responsibilities, and measurable outcomes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163339</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Value of Digitizing Manufacturing Environments</title>
<link>https://hdl.handle.net/1721.1/163338</link>
<description>The Value of Digitizing Manufacturing Environments
Briggi, Conor S.
There is significant variability and dispute around the value of digitally transformed manufacturing environments, and no single methodology for assessing it is broadly accepted. The variability stems from time dependencies, implementation effectiveness, and the dynamic environments in which digital solutions are deployed. However, an accurate accounting of this value is essential to company strategic planning. The research outlines how to approach this variability, the cost parameters to consider, the primary sources of value generation, and best practices for implementing Smart Factories. A tool that addresses these issues was successfully developed and deployed at Stanley Black &amp; Decker, helping the company assess the performance of its digitization efforts and tailor the delivered solution to optimize manufacturing performance. Results from this tool showed a positive expected return on investment and are provided to contextualize efforts in similar areas.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163338</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimodal Generative AI Chatbot for Root Cause Diagnosis in Predictive Maintenance</title>
<link>https://hdl.handle.net/1721.1/163337</link>
<description>Multimodal Generative AI Chatbot for Root Cause Diagnosis in Predictive Maintenance
Lorente Anon, Carla
Predictive maintenance plays a critical role in industrial operations by enabling organizations to detect potential equipment failures before they occur. However, while sensor data can identify anomalies such as excessive vibration or temperature fluctuations, technicians often struggle to efficiently diagnose and resolve the root causes of these alarms. This research presents a generative AI-powered chatbot designed to enhance the root cause diagnosis process in predictive maintenance by leveraging multimodal retrieval-augmented generation (RAG) and advanced AI-driven troubleshooting capabilities.&#13;
&#13;
The chatbot integrates multiple functionalities to support maintenance teams in resolving alarms quickly and accurately. Its time series analysis module processes real-time sensor data, identifying abnormal patterns and guiding users through a structured troubleshooting workflow. The retrieval-augmented generation (RAG) engine allows the chatbot to retrieve and synthesize relevant troubleshooting information from technical manuals, historical maintenance records, and structured knowledge bases, ensuring that technicians receive precise, grounded outputs. Additionally, the chatbot supports multimodal interactions, enabling users to upload images, audio, and video for more comprehensive diagnostics. By analyzing uploaded images of damaged components, transcribing spoken maintenance reports, and processing video footage of equipment malfunctions, the chatbot enhances problem identification and resolution.&#13;
&#13;
Another key feature of the chatbot is its interactive guided conversation system, which enables multi-turn dialogues that refine diagnostics dynamically based on technician input. Instead of providing static troubleshooting steps, the chatbot continuously adapts its responses to ensure that users receive the most relevant recommendations as the diagnostic process unfolds. To maintain safety and reliability, the system incorporates AI guardrails, filtering inappropriate or irrelevant inputs while ensuring that generated responses align with best practices for industrial maintenance.&#13;
&#13;
An evaluation framework is proposed to assess the chatbot’s effectiveness, focusing on retrieval accuracy, response relevance, and diagnostic efficiency. Initial results demonstrate approximately 30% reduction in diagnostic time, highlighting the chatbot’s potential to improve maintenance workflows, reduce downtime, and enhance technician productivity. This research underscores the transformative role of multimodal generative AI in predictive maintenance and lays the foundation for broader industrial applications. As a result of this work, a patent has been filed to protect the novel architecture and methods developed. Future work could focus on expanding retrieval capabilities to include video, integrating intelligent task automation for dynamic work order generation, and refining alarm prioritization using adaptive risk-based assessments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163337</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying over Individual Concepts</title>
<link>https://hdl.handle.net/1721.1/163336</link>
<description>Quantifying over Individual Concepts
Kobayashi, Filipe Hisao
Since Montague (1973), it has been assumed that quantificational DPs must, at least sometimes, be analyzed as quantifiers over individual concepts (i.e., functions from indices of evaluation to individuals). Because the domain of individual concepts is significantly greater than that of individuals, the challenge has always been how to properly constrain quantification over these objects. This dissertation proposes a solution to this problem by developing a novel theory of how NPs are shifted from predicates of individuals into predicates of individual concepts. The idea is that, since NPs are interpreted as restrictors, the nature of this shifting mechanism will constrain quantification. The proposal bears a strong resemblance to Karttunen's (1977) analysis of interrogative clauses: suitable predicates of individual concepts are built from the interaction of a type-shifting operation and existential quantifiers. In three case studies, I show how this theory can solve old and new puzzles: (i) the different interpretations of sentences of the form ‘[Det NP] changed’ (Nathan 2006); (ii) two ambiguities in the interpretation of concealed questions (Heim 1979); and (iii) question intruders, a novel puzzle concerning the interpretation of both embedded interrogative clauses and concealed questions.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163336</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Within ‘Reason’: A Study of Normative Language</title>
<link>https://hdl.handle.net/1721.1/163335</link>
<description>Within ‘Reason’: A Study of Normative Language
Watkins, Eliot
What do we mean when we say that someone ought to do something? What do we mean when we say that someone has a reason to do something? What do we mean when we say that someone has more reason to do one thing rather than another? The primary goal of this project is to shed light on these semantic questions.&#13;
&#13;
The picture of normative talk that I develop across this thesis has a distinctive feature: the notion of a reason (roughly, a fact that counts in favour of something) isn’t given any fundamental role to play. Instead, the meanings of ‘ought’, ‘must’ and ‘is a reason for…’ are all understood in terms of something gradable – they’re understood in terms of facts about how much reason there is for something to be done.&#13;
&#13;
Chapter One focuses on deontic modals like ‘ought’ and ‘must’. I argue that the standard semantics for these expressions is incompatible with the idea that facts about what you ought to do are connected with facts about what you have reason to do. I develop a new semantics for deontic modals which builds in the connections between ought and reasons from the ground up.&#13;
&#13;
Chapter Two centres on ‘reason’. We use ‘reason’ as both a count noun (as in “there is a reason for you to read my dissertation”) and a mass noun (as in “there is some reason for you to read my dissertation”). I argue that the best semantics for ‘reason’ will treat the mass form as fundamental. ‘Reason’ is a predicate of a particular kind of state – the state someone is in when they have reason to do something. I turn this result into an argument against the enduringly popular idea that count noun reasons are normatively fundamental.&#13;
&#13;
Chapter Three stays with reasons. According to a standard picture, normative reasons do not extend beyond the boundaries of agency. If something isn’t an agent – if it can’t do rudimentary practical reasoning – then there can’t be normative reasons for it to do one thing rather than another. I argue that this standard picture gets things totally wrong: there are reasons for non-agents to be certain ways and do certain things. We must not analyse what it is to be a reason by appealing to distinctively agential capacities.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163335</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lessons from CP in Passamaquoddy and beyond</title>
<link>https://hdl.handle.net/1721.1/163334</link>
<description>Lessons from CP in Passamaquoddy and beyond
Grishin, Peter Nicholas
This thesis explores various aspects of CP morphosyntax in Passamaquoddy-Wolastoqey and other Algonquian languages and their consequences for broader generative syntactic theory. It consists of two parts: one investigates clause typing and clause size in Passamaquoddy, and the other investigates the properties of a CP-layer agreement marker, the peripheral suffix, across Algonquian. In addition, a lengthy background chapter offers new data and insight on the correct analysis of the inverse and obviation in Passamaquoddy and across Algonquian.&#13;
&#13;
Part I studies the distribution of the three morphologically-distinguished non-imperative clause types in Passamaquoddy: the independent, the conjunct, and the subordinative. I argue that their distribution in complementation and coordination structures falls out naturally from their structural size, following the work of Wurmbrand and Lohninger (2023) and Bjorkman (2012, 2013). I support this conclusion by carefully investigating how each clause type interacts with Ā phenomena like wh movement and long distance agreement, showing that various complex interactions between these syntactic processes are derivative of clause size: independent clauses and conjunct clauses under epistemic attitudes are large, phasal CPs, conjunct clauses under direct perception predicates are smaller, non-phasal CPs, and subordinative clauses are bare TPs.&#13;
&#13;
Part II studies two unexpected properties of peripheral agreement across Algonquian: (i) its preference for agreeing with third persons, no matter their syntactic role (found in all Algonquian languages); and (ii) its preference for agreeing with the least local goal (found in languages like Passamaquoddy, Ojibwe, and Wampanoag). I explore the consequences of these typologically unusual properties for the theory of φ agreement and provide an analysis of the cross-Algonquian variation we find in peripheral agreement (building on Xu 2021, 2022). I argue that Algonquian third person preference forces us to accept Nevins (2007) and Trommer’s (2008) conclusion that third person cannot be underspecified relative to first and second person, even in the syntax (contra Preminger 2019a and van Alem 2023). Additionally, I show that Algonquian lowest preference doesn’t force us to give up on standard locality properties of Agree, and argue for an analysis under which C agrees with all matching accessible goals, but only spells out the last Agree relation—Expone Outermost—building a parallel with similar ideas in the domain of multiple case assignment. Finally, I capture cross-Algonquian variation in peripheral agreement by varying the specification of the peripheral agreement probe and varying which arguments are able to shift out of the VP phase.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163334</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>*ABA in Multidimensional Paradigms: A MAX/DEP-based account</title>
<link>https://hdl.handle.net/1721.1/163333</link>
<description>*ABA in Multidimensional Paradigms: A MAX/DEP-based account
Zompì, Stanislao
The last decade and a half has witnessed intensive research into *ABA universals—generalizations such as “If a nominative and the corresponding dative have the same exponent, then the corresponding accusative has that exponent, too” (Caha 2009; Smith et al. 2019). Most existing work on these universals has only focused on one ‘paradigm column’ at a time, by checking a given paradigm’s nominative singular, accusative singular, and dative singular, for example, with no heed to whether any of the relevant exponents would also show up in that paradigm’s nominative plural, accusative plural, or dative plural. However, some recent literature has pointed out that inspecting full paradigms is crucial to our understanding of *ABA, because some classic accounts that derive *ABA column-internally turn out to also make predictions as to what may or may not happen across columns, and those predictions are often incorrect (cf., among others, Christopoulos &amp; Zompì 2022). In this dissertation, I review those incorrect predictions and replace them with a novel generalization specifically concerning *ABA-like effects in multidimensional paradigms. I then set out to derive this generalization by setting up an exponent-selection system wherein exponents may both be underspecified and be overspecified with respect to their exponenda, with each of these departures from a perfect match being penalized but not necessarily fatal. In particular, I explicitly implement this intuition in optimality-theoretic terms, via a strict-domination ranking of violable Max and Dep constraints (cf. in particular Ackema &amp; Neeleman 2005; Wolf 2008; Müller 2020), and I show that the resulting system, while restrictive enough to derive the desired generalization, is also powerful enough to afford a natural account of some notoriously unnatural (‘morphomic’) exponent distributions in the inflection of Germanic pronouns and Romance verbs.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163333</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intangible Investments and the Accrual-Cash Flow Relationship</title>
<link>https://hdl.handle.net/1721.1/163332</link>
<description>Intangible Investments and the Accrual-Cash Flow Relationship
Soares, Fabio
This paper investigates whether the weakening negative relationship between accruals and operating cash flows can be attributed to the immediate expensing of intangible investments under current accounting standards. Building on the framework proposed by Green et al. (2022), I examine how the mechanical capitalization of intangible investments affects the accrual-cash flow relationship across firms with varying R&amp;D intensities. I show that the capitalization impacts the relationship in unexpected ways, indicating that the proposed rationale cannot fully explain the observed trend. I further exploit differences in accounting treatments under IFRS and US GAAP to test whether increased capitalization of intangible investments through development costs strengthens the relationship. I find that the relationship is significantly more negative under IFRS than US GAAP, independently of R&amp;D expenditure, suggesting that increased capitalization alone does not explain the differences. Additionally, the positive trend observed for high R&amp;D firms in both standards highlights that increased capitalization is insufficient to reverse the weakening trend. These results challenge the view that current accounting practices are the primary cause of the weakening accrual-cash flow relationship and underscore the need for further exploration of alternative explanations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163332</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Winning Over Gen Z: The Evolving Strategies of Sports Leagues&#13;
and Media in Response to Changing Youth Habits</title>
<link>https://hdl.handle.net/1721.1/163331</link>
<description>Winning Over Gen Z: The Evolving Strategies of Sports Leagues&#13;
and Media in Response to Changing Youth Habits
Zeng, Arnaud
This thesis examines how sports leagues and media companies are evolving to better connect with Generation Z, a generation whose changing expectations and habits – on-demand and socially driven – are reshaping the landscape of sports consumption. With fewer Gen Z fans watching full games on traditional mediums, the industry is being pushed to rethink its approach, adapting not just how content is delivered, but also what kind of content is created. Through a combination of expert interviews and industry data, this paper looks at the rise of short-form content, the importance of digital-first platforms, and the growing influence of storytelling through influencers and behind-the-scenes content. It also explores how new competition formats are reshaping what it now means to be a fan. The goal is to understand how the sports ecosystem is adjusting to remain relevant to its youngest audience.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163331</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metal Additive Manufacturing Capabilities for Footwear Prototyping and Product Creation</title>
<link>https://hdl.handle.net/1721.1/163330</link>
<description>Metal Additive Manufacturing Capabilities for Footwear Prototyping and Product Creation
Xi, Tiffany
In the footwear industry, the speed with which footwear designs reach the market affects a company's ability to meet customer demand accurately, since the probability of consumer preferences changing increases with time. This research investigates the impact of incorporating metal additive manufacturing capabilities into the product creation process of a major athletic footwear company. The study aims to determine whether, and in which applications, metal additive manufacturing can increase the speed at which footwear designs reach the market while maintaining or improving the desired product quality.&#13;
    A case study approach was employed, focusing on the development of rubber outsole molds using metal additive manufacturing technology. The study compared two process flows, one excluding and one including metal additive manufacturing, evaluating them on the speed of the development process and the quality of the produced footwear samples. The footwear sample quality was measured against production-equivalent samples obtained from the company’s manufacturing partner. The results demonstrated that incorporating metal additive manufacturing capabilities led to a reduction in the time required for mold design and fabrication. This speed advantage was primarily attributed to the ability to fabricate detailed textures directly into the mold, eliminating the need for outsourced etching processes.&#13;
    The visual quality of the samples produced did not fully match that of samples created by the company's manufacturing partners, but it was sufficient for initial sample development. Importantly, the traction properties were comparable to those of the manufacturing partner's samples, indicating that the functional quality of the samples is adequate for product development purposes.&#13;
This research provides valuable insights into the potential of metal additive manufacturing in accelerating footwear product development. Future work recommendations include exploring advanced modeling and design software and examining the impact of machine parameters on build quality. The findings of this study have implications for both the footwear industry and other sectors considering the integration of metal additive manufacturing technologies into their product development processes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163330</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Impact Investing through a Systems Thinking Lens:&#13;
Hallmarks of a Transformational Approach</title>
<link>https://hdl.handle.net/1721.1/163329</link>
<description>Evaluating Impact Investing through a Systems Thinking Lens:&#13;
Hallmarks of a Transformational Approach
Zhang, Yu (Sherry)
As impact investing increasingly aspires to drive systemic change, the question of how to evaluate such efforts remains underexplored. Traditional evaluation approaches, often grounded in linear causality and program-level outputs, struggle to capture the complexity, interdependence, and emergent nature of systemic transformation. This thesis investigates how systemic investing can be evaluated by integrating systems thinking, evaluation theory, and investing practice. It develops a conceptual framework of thirteen hallmarks that characterize systemic investing evaluation across dimensions such as time horizons, stakeholder engagement, cross-sector collaboration, and capital dynamics. Drawing on 46 real-world cases, the research identifies 112 indicators to make these hallmarks observable and assessable in practice. To support practical application, the thesis also introduces an AI-assisted scoring tool that automates the evaluation of narrative content using the framework. Together, these contributions aim to support more reflective, adaptive, and system-aware evaluation practices in the emerging field of systemic investing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163329</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Objective Optimization of Container Load Plans for Modulating Inventory Flow</title>
<link>https://hdl.handle.net/1721.1/163328</link>
<description>Multi-Objective Optimization of Container Load Plans for Modulating Inventory Flow
Sen, Shweta
Conventional strategies for container load planning (CLP) predominantly emphasize maximizing container utilization, which can result in suboptimally timed inventory arrival, increased inventory holding costs, and downstream operational inefficiencies. Using a real-world case study from a global footwear and apparel retailer, this research formulates a novel multi-objective mixed-integer linear programming (MOMILP) model that jointly considers container utilization, transportation and storage costs, and timing accuracy of inventory delivery. The proposed model utilizes a branch-and-bound algorithm to evaluate numerous load configurations, assessing the impact of different load rules and weighting parameters on transportation performance metrics and inventory flow. Results highlight the importance of prioritizing delivery precision in transportation management decisions, demonstrating that solely maximizing volume utilization can adversely affect overall cost efficiency when downstream inventory storage and operational requirements are considered. This work also provides a process map of load planning activities and identifies targeted operational improvements, such as consolidation bypass and purchase order (PO) partitioning, that can enhance inventory flow smoothness, reduce transportation costs, and support more responsive logistics networks. Collectively, this work extends existing CLP methodologies by incorporating delivery timing and inventory storage considerations into load planning decisions, offering practical enhancements for logistics optimization.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163328</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principles and Practices of Gap-Closing Investing</title>
<link>https://hdl.handle.net/1721.1/163327</link>
<description>Principles and Practices of Gap-Closing Investing
Kapor, Mitchell
This thesis examines the principles and practices of gap-closing investing, a distinctive model of early-stage venture capital investing that seeks to close gaps in access, opportunity, and outcomes for low-income communities and communities of color. Developed by Dr. Freada Kapor Klein and Mitchell Kapor through Kapor Capital, gap-closing investing integrates social impact objectives with a performance-driven investment strategy. The thesis combines historical analysis of socially responsible investing and impact investing with case studies of venture-backed startups to situate gap-closing investing within a broader tradition of values-based finance. It traces the ethical roots of impact investing to religious traditions, the emergence of socially responsible investing funds in the 1970s, and the formalization of impact investing terminology in the late 2000s. Gap-closing investing is distinguished by a developmental approach to startup growth, a redefinition of founder selection criteria emphasizing “distance traveled” over pedigree, and a focus on mitigating structural barriers through capital allocation. The thesis critically compares gap-closing investing to Corporate Social Responsibility (CSR) and Environmental, Social, and Governance (ESG) frameworks, arguing that gap-closing uniquely centers systemic impact as a core investment goal rather than a secondary consideration. The findings challenge the perception that impact investing is inherently concessionary, using performance data from Kapor Capital’s portfolio to demonstrate that intentional, equity-focused investing can produce both superior financial returns and measurable social outcomes. Gap-closing investing is presented as both a pragmatic investment strategy and a model for using venture capital to drive systemic change toward a more inclusive economy.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163327</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive Model for Battery State of Health</title>
<link>https://hdl.handle.net/1721.1/163326</link>
<description>Predictive Model for Battery State of Health
Garza Lozano, Catalina
As battery energy storage systems (BESS) become critical components of grid infrastructure, accurately assessing their State of Health (SoH) is essential for optimizing performance, reducing costs, and ensuring contractual compliance. This thesis investigates the development of accurate, real-time SoH estimation models for utility-scale battery storage sites operated by NextEra Energy. Current SoH measurements—derived from annual capacity tests and Battery Management System (BMS) data—are often inaccurate or infrequent, leading to either over- or under-augmentation and resulting in financial inefficiencies. &#13;
&#13;
To address this gap, four state estimation models were developed and evaluated: an Unscented Kalman Filter (UKF), a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), a multitask RNN, and a Delayed Reinforcement Learning (DRL) model. Each model uses operational data—such as voltage, current, temperature, and State of Charge (SoC)—to estimate degradation patterns and predict SoH at the rack, lineup, and site levels. Their outputs were compared against ground-truth capacity test results from a large-scale battery storage site.&#13;
&#13;
The DRL model demonstrated the highest accuracy, achieving a deviation of only 1.6 months compared to capacity test data, significantly outperforming existing BMS readings and the other three models. These findings underscore the value of advanced machine learning techniques in enabling proactive maintenance, optimized augmentation scheduling, and cost-efficient storage site management. This research offers a scalable framework for real-time SoH estimation across large fleets of battery storage assets and contributes to the broader goal of improving grid reliability through smarter energy storage management.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163326</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven Key Performance Indicator Modeling for Robotic Mobile Fulfillment Systems</title>
<link>https://hdl.handle.net/1721.1/163325</link>
<description>Data-Driven Key Performance Indicator Modeling for Robotic Mobile Fulfillment Systems
Sowards, Steffan
This work presents a study on the development and application of data-driven operational efficiency and throughput Key Performance Indicator (KPI) modeling for Robotic Mobile Fulfillment Systems (RMFS). Through rigorous analysis of extensive operational data from an operating RMFS, we demonstrate the efficacy of machine learning approaches in predicting and optimizing the performance of complex warehouse automation systems. The research employs advanced techniques, including gradient boosted bagged tree ensembles and AutoML, to capture complex input interactions and provide parallel predictions across multiple KPIs. Our models achieve a mean R² value of 0.7838 across all templates and KPIs, with particularly strong performance in our top-performing metric across templates (mean R² of 0.9660).&#13;
&#13;
The study introduces a novel framework for feature engineering and selection, emphasizing actionable inputs while excluding intermediate variables to enhance model interpretability and practical utility. We validate our approach against novel operating conditions, demonstrating the models’ ability to generalize to unseen scenarios. Interpretability techniques, including SHAP analysis and permutation feature importance, provide valuable insights into system behavior and key performance drivers.&#13;
&#13;
This research establishes a generalizable framework for leveraging data-driven modeling in predicting and optimizing brownfield warehouse automation system behavior. The developed approach offers significant potential for enhancing operational decision-making, system design, and strategic planning in the rapidly evolving field of e-commerce fulfillment.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163325</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Bayesian Entrepreneurship: Evaluating and Commercializing Unconventional Ideas</title>
<link>https://hdl.handle.net/1721.1/163324</link>
<description>Essays on Bayesian Entrepreneurship: Evaluating and Commercializing Unconventional Ideas
Gius, Luca
This dissertation investigates a fundamental challenge complicating the evaluation and commercialization of entrepreneurial opportunities: some ideas are valuable precisely because not everyone recognizes their worth. The first essay analyzes barriers against the commercialization of contrarian ideas. Researchers working with unpopular AI algorithms tend to commercialize their work only after a successful public evaluation. Those who clear this hurdle subsequently achieve better entrepreneurial outcomes. A regression-discontinuity analysis shows that this partly reflects status quo bias: for unpopular methods only, winning a contest serves as a certification, channeling disproportionate resources to the winner while equally strong near-misses remain sidelined. The second essay finds that greater judge disagreement in venture competitions predicts higher future success, especially for more distinctive startups. The third essay shows that skewness in idea value exacerbates asymmetric information in markets for ideas. Using data from auctions for digital businesses, I illustrate how this can explain why online marketplaces for ideas have struggled to emerge despite lowering transaction costs: informational frictions severely depress bids and prevent high-value digital startups from trading. The final essay, coauthored with Alfonso Gambardella and Scott Stern, introduces the archetype of Homo Entrepreneuricus: an entrepreneur who deliberately tests subjective beliefs through structured experimentation to navigate uncertainty.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163324</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting Additive Structure in Algorithm Design and Fine-Grained Complexity</title>
<link>https://hdl.handle.net/1721.1/163323</link>
<description>Exploiting Additive Structure in Algorithm Design and Fine-Grained Complexity
Jin, Ce
In this thesis, we investigate the fine-grained complexity of various algorithmic problems with an additive flavor, including 3SUM, Subset Sum, and their close relatives. We explore their connections to various areas, such as graph algorithms, discrete optimization, combinatorial pattern matching, and computational geometry. Our new results include improved algorithms and conditional lower bounds for a wide range of problems, answering multiple open questions from the literature:&#13;
&#13;
• Conditional lower bounds for graph problems: We prove new lower bounds for 4-Cycle Listing and Approximate Distance Oracles conditioned on the 3SUM Hypothesis. As a key intermediate step, we show a fine-grained reduction from 3SUM to the special case of 3SUM where all pairwise sums of input numbers are distinct.&#13;
&#13;
• Combinatorial pattern matching: We design improved algorithms for Text-to-Pattern Hamming Distances, Pattern Matching with Wildcards, and Geometric Pattern Matching, by drawing connections from 3SUM and sparse convolution.&#13;
&#13;
• Knapsack-type problems: We obtain a pseudo-polynomial time algorithm for 0-1 Knapsack with (conditionally) near-optimal dependence on the maximum item weight, an improved approximation scheme for the counting problem #Knapsack, and improved exponential time algorithms for the total search problem Pigeonhole Equal Subset Sum.&#13;
&#13;
In order to obtain these results, we employ and develop techniques based on convolution algorithms and their extensions, as well as classic tools from additive combinatorics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163323</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems Approach to Component Code Optimization for Wound Closure Portfolio</title>
<link>https://hdl.handle.net/1721.1/163322</link>
<description>Systems Approach to Component Code Optimization for Wound Closure Portfolio
Dubelier, Madeline
Product portfolio management involves strategically analyzing, optimizing, and expanding a company’s offerings to maximize value and align with business goals. While companies often focus on portfolio expansion to meet evolving customer needs and gain market share, product deletion is frequently overlooked, leading to code proliferation and undermining operational efficiency. Effective variety management often requires input from stakeholders across the supply chain, yet few published methods take this approach. This work presents a systematic supply chain management approach to portfolio optimization using a case study from Johnson &amp; Johnson MedTech. The case study is on pledgets, key components in non-absorbable suture systems. Recent pledget product quality issues exposed the need for a systematic approach to reducing component variety and improving operational efficiency. A current-state analysis addressed multiple dimensions of complexity. The evaluation combined qualitative and quantitative data and led to a five-stage optimization strategy. The proposed future-state portfolio reduces component variety by 60%, guided by three constraints: continue to meet customer needs, protect competitiveness, and reduce manufacturing complexity. This method provides a replicable model for rationalizing legacy portfolios in the medical device industry.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163322</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Content Creator Conduct</title>
<link>https://hdl.handle.net/1721.1/163321</link>
<description>Content Creator Conduct
Du, Jason
This thesis investigates the behaviors of content creators. The first study examines whether musicians learn from the success of earlier songs when they create new ones, finding that tracks on a musician’s next album tend to be more similar to the songs that performed better on their current album. The second study explores the cultural, social, and psychological aspects of content creation by tracing first-person singular pronoun usage in contemporary music, revealing geographic, temporal, and genre-based patterns. The third study analyzes the association between content creators' learning tendencies and the explainability of previous outcomes, showing that news editors are more likely to imitate previously popular headlines when those headlines' success is more explainable. Collectively, these studies facilitate understanding of the factors that underlie content creation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163321</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Optimization-Based Approach to Efficient Clearance Inventory Allocation</title>
<link>https://hdl.handle.net/1721.1/163320</link>
<description>An Optimization-Based Approach to Efficient Clearance Inventory Allocation
Perez Munoz, Karla Mayra
Allocating clearance inventory effectively remains a critical challenge in retail environments characterized by short decision cycles, fluctuating demand, and operational constraints. Decisions made during the clearance period are particularly impactful, as they determine the final opportunity to recover value from unsold products before they lose relevance or perish. This thesis presents a mathematical optimization model designed to support the redistribution of discounted articles across a network of stores, with the objective of maximizing revenue while satisfying constraints related to stock availability, store capacity, and observed demand at the article-size level. Developed in collaboration with a leading global fashion retail company, the model was built to align with existing business processes and balances analytical rigor with simplicity in implementation. The model incorporates business-defined parameters and is tested using real operational data from selected distribution centers. It demonstrates significant improvements over the current practice of single-item allocation and addresses the computational challenges posed by the high dimensionality of real-world retail problems. By implementing efficient iterative procedures and demand-scaling mechanisms, the model ensures tractability while capturing the complexity of the business environment.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163320</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gas Network Preparations for Networked Geothermal</title>
<link>https://hdl.handle.net/1721.1/163319</link>
<description>Gas Network Preparations for Networked Geothermal
Serbent, M. Patrick
As Massachusetts pursues its goal of achieving net-zero carbon emissions by 2050, the transition from natural gas to sustainable thermal energy solutions presents both opportunities and challenges for its 1.6 million natural gas customers. This thesis investigates the potential of networked geothermal systems as a viable alternative to traditional natural gas infrastructure, with a focus on leveraging existing gas network replacement programs, such as the Gas System Enhancement Plan (GSEP), to facilitate this shift. A four-phase methodology, encompassing site selection, model development, cost analysis, and business case formulation, evaluates the feasibility of integrating high-density polyethylene (HDPE) piping into leak-prone pipe replacement efforts as a preparatory step for future geothermal or hydrogen applications. Findings suggest that HDPE offers potential material and inventory cost advantages over medium-density polyethylene (MDPE), with added flexibility for low-carbon conversions, though significant upfront costs and regulatory uncertainties remain barriers. At an example site already scheduled for main replacement work, changing the pipe from MDPE to HDPE increased total project cost by 6%. This work underscores the potential of aligning infrastructure modernization with climate goals, offering a framework for utilities like National Grid to navigate the energy transition in cold, densely populated regions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163319</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning and Biosecurity in the Age of Pandemics: Advancing Biological Research and Safeguarding Synthetic Biology</title>
<link>https://hdl.handle.net/1721.1/163318</link>
<description>Machine Learning and Biosecurity in the Age of Pandemics: Advancing Biological Research and Safeguarding Synthetic Biology
Siddiqui, Sameed Muneeb
This thesis explores the dual imperatives of enhancing biosecurity and accelerating outbreak response. The research addresses two key areas. First, the thesis analyzes the implications of a national nucleic acid synthesis screening framework on outbreak response agility. A first-hand perspective is provided, identifying potential bottlenecks stemming from lagging customer verification and sequence screening approaches. Concrete solutions, such as pre-verification of first responders, priority processing channels, pre-approval of standard countermeasure sequences, and optimized computational screening, are proposed to mitigate these challenges and ensure rapid response capabilities without compromising biosecurity. Second, “Lyra,” a machine learning architecture for biological sequence modeling, is presented. Lyra is grounded in the biological principle of epistasis and leverages state space models (SSMs) combined with projected gated convolutions to efficiently capture both local and long-range sequence interactions. We demonstrate new mathematical theory connecting SSMs with the approximation of polynomial functions, which is key to predicting epistatic effects. This subquadratic architecture achieves state-of-the-art performance on diverse biological tasks, including protein fitness landscape prediction, RNA function prediction, and CRISPR guide design, while utilizing substantially fewer parameters and computational resources than existing foundation models like transformers. The thesis concludes by highlighting the synergistic potential of advanced machine learning and thoughtful policy to significantly improve pandemic preparedness.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163318</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>In-or-Out: Creators’ Odyssey for Success</title>
<link>https://hdl.handle.net/1721.1/163317</link>
<description>In-or-Out: Creators’ Odyssey for Success
Li, Zelin
The creator economy is flourishing, driven by shifts in advertising budgets and a surge in the supply of content creators. This has introduced a new challenge for firms: identifying which early-stage creators will grow to become stars. By identifying future stars, firms can choose who to invest their scarce resources in. They may also be able to purchase effective influence at a (proportionately) lower price than what they will pay once a creator becomes a star. Past research has shown that predicting which content will become viral is challenging. Instead, we focus on using content to predict which early-stage creators will grow their follower bases. We measure both the positioning of a creator’s early content and how the creator adjusts this positioning. We find that the initial position is not predictive of future success. However, subsequent adjustments in position are predictive, particularly if the creator’s initial follower base has grown consistently, rather than over a short period of rapid (viral) growth. Our insights inform the construction of predictive models that outperform baseline models in out-of-sample predictions of which creators will grow their followers the fastest.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163317</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic Cooperation in Water Management: A Game-Theoretic Approach to Sustainable Infrastructure in Chilean Mining</title>
<link>https://hdl.handle.net/1721.1/163316</link>
<description>Strategic Cooperation in Water Management: A Game-Theoretic Approach to Sustainable Infrastructure in Chilean Mining
Moscoso Restovic, Rodrigo Y.
Through a game-theoretic methodology, this thesis examines collaborative approaches to managing water infrastructure in Chilean mining operations. The research examines cooperative interactions among mining firms, local residents, and regulatory bodies to tackle water scarcity and growing demand in Chile's mining industry. It applies game theory, with a focus on cooperative games and bargaining models, to develop a structured analytical framework for analyzing stakeholder dynamics, including their incentives and opportunities for cooperation.&#13;
The thesis centers on a mathematical model that treats stakeholders as rational agents who seek to maximize their benefits while facing resource constraints and regulatory limitations. Cooperative game theory enables detailed examination of coalition-building processes, resource-sharing agreements, and benefit allocation practices, helping to identify stable cooperative arrangements.&#13;
The primary findings show that mining companies achieve greater efficiency gains through collaboration on water infrastructure than through separate individual investments. This thesis presents quantitative evidence that partnerships among mining projects generate significant financial savings and lead to better resource usage and positive environmental and social results.&#13;
Sensitivity analyses identify that cooperative stability depends on several critical factors, including asymmetries among the mining projects, the sequence in which investment decisions are made, and the transfer price of water sold to projects that would otherwise free-ride. The final part of the thesis presents concrete suggestions for policymakers and industry leaders to develop cooperative frameworks through specific policy mechanisms and incentive systems that support long-term collaboration.&#13;
The study advances existing academic knowledge by applying detailed game-theoretic approaches to practical problems in sustainable mining. The findings reveal that strategic partnerships serve as fundamental tools for managing resources and can effectively tackle the urgent water scarcity challenges Chile faces.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163316</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Driving Manufacturing Best Practices Using Multimodal AI</title>
<link>https://hdl.handle.net/1721.1/163315</link>
<description>Driving Manufacturing Best Practices Using Multimodal AI
Zachary, Mark
Multimodal artificial intelligence offers promising solutions for enhancing operational excellence in contract manufacturing, where small job shops typically operate with limited standardization and high process variability. This research develops a part similarity tool that integrates geometric, material, and scale information to improve quoting accuracy and engineering efficiency in high-mix, low-volume production environments. After examining the fragmented manufacturing landscape and reviewing current AI applications in manufacturing, the study introduces an approach based on Variational Autoencoders for encoding 3D geometry alongside material properties and dimensional scale information. The technical implementation addresses challenges of multimodal fusion, missing data handling, and computational efficiency, while a qualitative ablation study demonstrates how this comprehensive approach outperforms single-modal methods in manufacturing relevance. Engineers benefit from improved insights for manufacturing planning, while estimators achieve more consistent cost predictions using the multimodal system. Reinforcement learning with human feedback provides a mechanism for continuous refinement, creating a framework that bridges geometric similarity with manufacturing context and reduces subjectivity in critical business processes. The research contributes both theoretical insights into multimodal learning and practical implementation strategies for standardizing operations in contract manufacturing environments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163315</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Made in Mexico: How Chinese Firms Navigate Nearshoring Amid Global Trade Disruptions</title>
<link>https://hdl.handle.net/1721.1/163314</link>
<description>Made in Mexico: How Chinese Firms Navigate Nearshoring Amid Global Trade Disruptions
Zeng, Bob
This research explores the surge of Chinese manufacturing investments in Mexico as a strategic adaptation to recent global trade disruptions, specifically the U.S.–China trade tensions and the COVID-19 pandemic. By analyzing Chinese firms' motivations and strategies, the study highlights how they leverage Mexico’s strategic geographic proximity, favorable trade conditions under the USMCA, competitive labor market, and established industrial infrastructure to secure continued access to the North American market while minimizing tariff impacts and supply chain risks. Sector-specific analyses of the automotive, electronics, and renewable energy industries reveal distinct operational, regulatory, and cultural challenges encountered by these companies during their transition to Mexican production facilities. In addressing these challenges, Chinese firms have adopted strategies such as supply chain localization, rigorous adherence to North American regulatory frameworks, and effective cross-cultural management practices. Furthermore, the analysis situates this trend within the broader geopolitical context, emphasizing the role of evolving U.S. trade policies and proactive Mexican industrial initiatives in shaping the nearshoring landscape. The findings suggest that while Chinese investment in Mexico presents significant opportunities for industrial upgrading and enhanced bilateral cooperation, the longevity and effectiveness of these ventures depend on firms' strategic flexibility, deeper integration into local economies, and adept management of complex geopolitical and regulatory environments. By evaluating these elements, the research provides valuable insights into the drivers behind the increased Chinese presence in Mexico and the broader implications for global trade patterns, supply chain resilience, and regional economic integration.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163314</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging the Gap: Strategies and Roles of Chinese Fintech Entrepreneurs and Startups in Sub-Sahara African Markets</title>
<link>https://hdl.handle.net/1721.1/163313</link>
<description>Bridging the Gap: Strategies and Roles of Chinese Fintech Entrepreneurs and Startups in Sub-Sahara African Markets
Zhu, Yuan
This thesis examines the strategies and operational practices of Chinese fintech entrepreneurs in sub-Saharan African markets, with a focus on how they navigate regulatory fragmentation, localize business models, and build trust in low-infrastructure environments. Drawing on fieldwork and semi-structured interviews with founders, executives, and product leads from fifteen China-linked fintech firms across Nigeria, Kenya, and Francophone Africa, the study investigates how these actors engage with underdeveloped financial systems while adapting knowledge and models from China’s digital finance ecosystem. The research identifies several distinct approaches to market entry and adaptation, including platform integration, compliance-focused positioning, and informal ecosystem engagement. Findings suggest that these ventures do not simply export Chinese models but instead reconfigure them in response to local constraints in regulation, consumer trust, and institutional capacity. By analyzing firm-level strategies in diverse regulatory and market settings, this study contributes to broader discussions on transnational entrepreneurship, financial infrastructure development, and the evolving role of private actors in advancing digital inclusion across emerging economies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163313</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Data-Driven Work Center Assignment and Pricing Strategy for a Job Shop</title>
<link>https://hdl.handle.net/1721.1/163312</link>
<description>A Data-Driven Work Center Assignment and Pricing Strategy for a Job Shop
Carson, Alix
Job shops with semi-autonomous work centers must understand their capacity utilization and financial state to maximize efficiency and profitability. Machine monitoring software allows managers to see the state of machines at any time and capture real-time capacity utilization. Job shops are positioned to maximize the value of these work centers and must connect their manufacturing and operations strategy to real-time shop data to maximize efficiency. This research is a case study in how a job shop can create a right-to-win strategy targeting jobs that are compatible with, and profitable on, semi-autonomous machines.&#13;
&#13;
ADDMAN Precision Baltimore (APBAL), a precision machine shop in the aerospace and defense industry, is facing labor constraints and underutilized work centers. This research aims to develop a structured quoting strategy and strategic pricing model to optimize job allocation between APBAL’s two semi-autonomous machining centers: the Makino Machining Complex 2 (MMC) and the Fanuc Robodrill. By integrating qualitative observations, historical job data, and machine utilization metrics, this study identifies inefficiencies in current job assignment practices. Key findings indicate that aligning work center assignments with projected profitability and capacity utilization can improve overall efficiency. A decision-making framework and pricing matrix are proposed to enhance job quoting accuracy, optimize machine usage, and increase APBAL’s competitiveness in securing high-volume contracts. The results offer a scalable framework for APBAL and its parent company, ADDMAN Engineering, to deploy across other machining facilities, ultimately improving operational performance and financial outcomes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163312</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Technoeconomic Model for Maritime Applications of Green Power Technologies</title>
<link>https://hdl.handle.net/1721.1/163311</link>
<description>A Technoeconomic Model for Maritime Applications of Green Power Technologies
Tuana, Daniel I. S.
Growing societal and regulatory pressures are causing industries around the world to consider greener alternatives to conventional fossil fuel power technologies. As a result, power solution suppliers like CAT face strategic uncertainties: if, where, and when their core product markets will be disrupted by the adoption of novel alternative technologies. To help inform CAT’s future product and service strategy, in conjunction with previous research on powering mines and data centers, this thesis outlines the development of code to estimate and compare the total cost of ownership of battery, hydrogen fuel cell, and nuclear power technologies against incumbent fossil fuel-driven systems in a variety of maritime scenarios, including serving shoreside port electricity demand and on-water power demand across a diverse set of vessel segments.&#13;
The code leverages first principles, empirical models, and researched assumptions to model the performance and costs of power systems in response to stochastically generated and deterministic power demand profiles over the useful lifetimes of the assets. For vessel applications, the code also estimates the volumes and masses of the alternative systems as a basis to judge their practicality. Hypothetical power systems for four archetypal ports and six vessel segments (across a range of power nodes) were studied to identify potential opportunities in and adjacent to the marine markets CAT currently serves.&#13;
The outcomes of the study align with conventional intuition regarding the application of the technologies considered. Under certain conditions, the results support the technoeconomic case for the implementation of battery technology on short-haul vessels whose operations are predictable and would not be disrupted by shortened refueling/recharging intervals. Similarly, the results show that adoption of small modular nuclear reactors at ports and on large vessels with consistently large baseload power demand can provide economic advantages over incumbent fossil fuel technologies. The results of the simulations are sensitive to several technology-agnostic parameters, including discount rates, fuel and electricity prices, demand growth rates, and other macroeconomic conditions. In the future, with ample case-specific data, the code developed for this thesis may provide convincing justification for the adoption of an alternative technology to serve the power demand of an individual port or vessel.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163311</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discrete Event Simulation as a Predictor for Factory Traffic Management</title>
<link>https://hdl.handle.net/1721.1/163310</link>
<description>Discrete Event Simulation as a Predictor for Factory Traffic Management
Ramirez Echavarria, Esteban
Manufacturing environments increasingly rely on automation and data-driven decision-making to optimize efficiency and production rates. This study explores the application of Discrete Event Simulation (DES) to model material flow and predict AGV (Automated Guided Vehicle), crane, and cart movements within a factory. The goal is to develop a digital twin that enables real-time decision-making, optimizes scheduling, and minimizes bottlenecks.&#13;
&#13;
To achieve this, we utilize SimPy, an open-source Python-based DES library, in conjunction with a custom-built API and React.js front-end interface. The study evaluates available DES software options and justifies the selection of SimPy based on flexibility, integration capabilities, and its suitability for modeling custom business rules. The solution is structured into modular components handling path planning, transporters, flows, stations, hot-cold starts, and utilities, ensuring adaptability to future improvements.&#13;
&#13;
A validation framework was established, utilizing historical data comparison and real-time validation to assess the simulation’s predictive accuracy. Over a 40-day testing period, the simulation achieved 89.6% accuracy and a sensitivity, or true positive rate (TPR), of 80.2%. The simulation provides a reliable first-pass scheduling tool that can be further refined with improved data collection.&#13;
&#13;
The findings indicate that while full automation of AGV deployment is not yet feasible, this study lays the foundation for future integration with the factory’s Vehicle Management System (VMS). Business implications include the potential for automated scheduling, enhanced material flow visibility, and optimization of capacity planning. Future work should focus on improving data accuracy, integrating live factory data streams, and refining algorithms for predictive scheduling.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163310</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Strategy to Execution: An Optimization Approach to New Product Placement in the Apparel Industry</title>
<link>https://hdl.handle.net/1721.1/163309</link>
<description>From Strategy to Execution: An Optimization Approach to New Product Placement in the Apparel Industry
Netteberg, Sofie F.
This thesis presents the development and implementation of a new product placement optimization model for a large global apparel and footwear company’s supply chain, aimed at maximizing network-wide profits while aligning with long-term strategic goals amidst demand volatility. The model leverages a mixed-integer linear programming approach, integrating probabilistic demand simulations to optimize the placement of new products within the company’s existing network of third-party partner company factories. Key elements of the model, including decision variables, price and cost coefficients, an objective function, and constraints that reflect operational realities and strategic priorities, are discussed in detail. Through analysis and results validation, this research demonstrates how data-driven optimization can improve network profitability and adherence to companies’ long-term strategic supply chain objectives. The thesis then includes an exploration of historic demand variability at the host company, followed by a recommendation to integrate probabilistic forecasting in network planning to generate production networks more robust to volatility in consumer product demand. The findings contribute to advancing data-driven decision-making in supply chain management and offer actionable insights for future product placement strategies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163309</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Policy Approaches and Entrepreneurial Responses in Strategic Industries: Comparing Innovation Ecosystems in China and the United States</title>
<link>https://hdl.handle.net/1721.1/163308</link>
<description>Policy Approaches and Entrepreneurial Responses in Strategic Industries: Comparing Innovation Ecosystems in China and the United States
Ni, Mengmeng
This thesis investigates how government policy approaches shape regional entrepreneurial ecosystems and influence entrepreneurial strategy in strategic industries across China and the United States. Through comparative analysis of four region-industry pairs—Shanghai's semiconductor sector, Shenzhen's drone technology sector, Boston's biotechnology cluster, and New York's fintech ecosystem—the study examines the dynamic interplay between institutional design and entrepreneurial behavior. Drawing on Porter's Cluster Theory, Mazzucato's Entrepreneurial State concept, and the MIT REAP framework, the research develops a novel policy categorization encompassing four innovation governance tools: Cluster and Crisis Response Tools, Innovation Ecosystem Tools, Market-Shaping Tools, and Institutional Restructuring Tools. A qualitative case study methodology is employed, with in-depth firm-level analyses of Biren Technology in Shanghai and Moderna in Boston illustrating how entrepreneurs strategically respond to distinct institutional environments. The findings reveal four distinct models of innovation governance: Shanghai’s state-directed coordination, Shenzhen’s regulatory experimentation, Boston’s market-based orchestration, and New York’s regulation-centered oversight. Across contexts, entrepreneurs emerge as interpretive agents who actively leverage, adapt to, and at times reshape institutional conditions. This thesis contributes to the literature by offering comparative insights into the co-evolution of public policy and entrepreneurial strategy. It also provides practical implications for policymakers designing innovation ecosystems and for entrepreneurs navigating increasingly complex regulatory and technological landscapes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163308</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing a Data-Driven Approach to Reducing Excess Inventory in a Multi-Echelon Supply Chain</title>
<link>https://hdl.handle.net/1721.1/163307</link>
<description>Developing a Data-Driven Approach to Reducing Excess Inventory in a Multi-Echelon Supply Chain
Gosen Cappellin, Carlos Daniel
The medical technology company MedTechCo, specifically its Spine division, has deployed millions of implants in hospitals to meet demand. When inventory deployment and allocation are not managed appropriately to ensure that products are in the right place at the right time, excess inventory arises. Currently, MedTechCo Spine holds large amounts of excess inventory that are not utilized effectively. &#13;
&#13;
The objective of this research is to leverage a data-driven approach to define and reduce implant excess inventory at scale for MedTechCo’s Spine business unit in the United States. The research strategy used in this thesis begins with a root cause analysis to understand the causes of excess inventory. A robust data model was then developed to determine appropriate inventory levels by SKU, map all excess field inventory, and prioritize the most valuable excess SKUs. This data model was used to&#13;
automate the company’s ERP system to repurpose excess inventory, limit unnecessary inventory deployments to the field, and eliminate redundant backorders. Finally, an impact analysis was performed to measure the potential excess inventory reduction in both dollar value and units. &#13;
&#13;
Time constraints limited the implementation of the recommendations during the research period. However, MedTechCo Spine agreed to incorporate the proposed recommendations into its ERP system and operational processes in mid-2025. These recommendations will help reduce implant excess field inventory, unlocking tied-up capital, creating flexibility in the supply chain to meet demand changes, and enabling additional investment in innovation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163307</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>AI and ML in Real Estate Underwriting: Transforming Financial Decision-Making and Operational Efficiency</title>
<link>https://hdl.handle.net/1721.1/163306</link>
<description>AI and ML in Real Estate Underwriting: Transforming Financial Decision-Making and Operational Efficiency
Jaklis, Cyril
Real estate is the world's largest untapped market, valued at $650 trillion (Statista, 2023), yet technological innovation, particularly in financial underwriting, lags behind. Most institutional players and family offices still rely on Excel spreadsheets, broker-driven data collection, and expensive public database subscriptions. These outdated approaches result in inefficiencies and higher operational expenses. Firms are now seeking more innovative tools to improve their workflows and predict their Net Operating Income (NOI). Development and maintenance costs are often underestimated due to optimistic estimates and unplanned escalations in material or labor costs. This paper examines how to increase the accuracy of underwriting by examining the full underwriting process, identifying operational inefficiencies, and analyzing how new technologies such as Artificial Intelligence (AI) and Machine Learning (ML) are currently being utilized to better value properties and reduce error margins. The analysis covers the entire underwriting process, spanning data sourcing, collection, structuring, and analysis. It also reviews the platforms and software tools utilized to connect these phases, from initial appraisal to investment memo and investment committee (IC) decision-making. The objective is to understand practical constraints, recognize opportunities for optimization, and explore where investors can strategically position themselves to leverage these technologies, while also providing a forward-looking outlook on the changing function of AI/ML in the sector over the next decade.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163306</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation Into Sources of Volatility in Sortation Center Processes to Improve Productivity and On-Time Delivery</title>
<link>https://hdl.handle.net/1721.1/163305</link>
<description>Investigation Into Sources of Volatility in Sortation Center Processes to Improve Productivity and On-Time Delivery
Fenstermacher, Andrew D.
Target Corporation has expanded its Last Mile Delivery (TLMD) capabilities through an omni-channel, "stores-as-hubs" strategy, using stores as fulfillment centers for online orders. Target Sortation Centers were developed to receive packages from stores in the region and to sort, route, and dispatch these packages each day, enabling faster delivery for online orders. Designed to never hold inventory, each center aims to deliver every package it receives that same day. This presents new operational challenges common to brick-and-mortar retailers that develop an omni-channel strategy. This thesis investigates core processes in Sortation Centers to identify sources of volatility and propose improvements that enhance productivity and on-time delivery while minimizing labor costs and incomplete volume. Many of the current processes in Target’s Sortation Centers are manual and unstandardized. Moreover, improving operations and piloting changes is challenging, especially during peak seasons. To address these challenges, this study employs discrete event simulation (DES) using SimPy, informed by current operational data and in-person observations, to model and analyze current processes. Key findings reveal that pre-sorting TLMD volume from other national carrier volume at the stores, prior to linehaul pickup for same-day packages, decreases the overall completion time for the day’s volume by 5.8% and lowers the probability of incomplete volume by up to 85% under excess-volume scenarios. These process changes enhance site resilience to demand volatility without significant capital investment. The research underscores the value of DES for testing process improvements virtually and highlights the need for network-level optimization across Target’s omni-channel supply chain. Recommendations include piloting floor loading and pre-sorting in select markets, alongside future exploration of performance standards, automation, and standardized processes to further mitigate volatility impacts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163305</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Vision for Cell Line Development</title>
<link>https://hdl.handle.net/1721.1/163304</link>
<description>Computer Vision for Cell Line Development
Albright, Jackson A.
Anomalies in Cell Line Development have a significant impact on material and opportunity costs when screening for the Master Cell Bank that is used for all clinical drug development. Cell Line Development scientists collectively spend hundreds of hours identifying anomalies in fluorescent and brightfield imagery to ensure only high-performing cell clones are downselected for testing. The use of computer vision models alleviates this burden on scientists and better standardizes the selection process. Three techniques were tested for classifying anomalous and nominal fluorescent images: an autoencoder, an edge CNN, and an RGB SVM. Examining performance through composite metrics such as F1 score and MCC, the autoencoder (0.8744 and 0.8619, respectively) outperformed the edge CNN (0.8488 and 0.8257) and the RGB SVM (0.8343 and 0.8252) for fluorescent anomaly classification. The high performance of the autoencoder came from training solely on anomalous images and using a percentile-based threshold to classify images by their reconstruction error. Data robustness proved to be an issue, with certain test datasets showing worse performance due to the inherent variability of images within both the nominal and anomalous classes. Gathering and labeling more datasets for training and testing will allow models to learn from this variability and provide higher confidence in model performance for real-time screening applications. Adjusting the structure of the traditional autoencoder to that of a variational autoencoder will also help with learning the variability of images within classes and improve performance on previously unseen data. Overall, the current iteration of the models proves beneficial for anomaly detection in Cell Line Development and demonstrates that modifications to data sourcing and model architecture could yield even better performance.
These same techniques could be applied to similar biopharmaceutical applications provided care is taken to properly source clean and labeled image data and construct appropriate model architectures for the images' inherent features.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163304</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensor simulation for autonomous vehicles: Diffusion based image and depth generation for driving scenes</title>
<link>https://hdl.handle.net/1721.1/163303</link>
<description>Sensor simulation for autonomous vehicles: Diffusion based image and depth generation for driving scenes
Bieske, Linn
Background: Autonomous vehicle (AV) testing requires extensive real-world data collection, which is costly and time-consuming. Existing simulation techniques struggle to generate high-fidelity sensor data, particularly for multimodal signals like RGB camera images, LiDAR depth maps or LiDAR point clouds. Recent advances in generative AI, specifically diffusion models, offer a solution for improving synthetic driving scene simulations.&#13;
&#13;
Objective: This thesis enhances diffusion-based generative models to: 1) Encode LiDAR depth data into a stable diffusion model’s latent space, 2) Simultaneously generate consistent, high-fidelity sets of eight RGB camera images, 2D LiDAR depth maps, and 3D LiDAR point clouds covering a full 360-degree range, and 3) Evaluate the realism and consistency of the generated sensor data.&#13;
&#13;
Methods: A multimodal, multi-view latent stable diffusion model was trained to generate complete 360° synthetic driving scenes and simulate camera and LiDAR sensor signals for autonomous vehicles. The generated scenes were evaluated for sensor alignment, realism, and depth accuracy.&#13;
&#13;
Results: The diffusion model produced realistic, spatially consistent camera and LiDAR sensor data, reducing reliance on real-world validation miles and lowering AV testing costs. To further improve the quality of multimodal driving scene generation, it is recommended to retrain the VAE on LiDAR data.&#13;
&#13;
Conclusion: This work advances AV simulation by extending stable diffusion models to multimodal sensor data. Future improvements should focus on real-time generation and expanding to additional sensor types or hardware setups for enhanced simulation fidelity.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163303</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive Modelling of Customer Membership Purchases to Minimize Marketing Costs</title>
<link>https://hdl.handle.net/1721.1/163302</link>
<description>Predictive Modelling of Customer Membership Purchases to Minimize Marketing Costs
Liu, Ying
This thesis develops and evaluates a series of predictive models to improve the efficiency of marketing resource allocation in the context of an outbound campaign for a premium membership product. The central objective is to identify customers most likely to respond positively to a membership offer, thereby minimizing outreach costs and maximizing return on investment. The study leverages a dataset from a large retail superstore that includes customer demographics, transactional behavior, and campaign response history. Data preprocessing involved the creation of engineered features such as age and tenure groupings and the transformation of categorical variables into factor types suitable for classification algorithms. Three modeling approaches were applied: classification with logistic regression, classification and regression trees (CART), and random forest. Logistic regression yielded strong predictive performance with an AUC of 0.851 and identified several statistically significant predictors, including spending on wine and meat products, recent purchase behavior, and tenure length. However, its primary limitation lies in its inability to accommodate cost asymmetries, as it lacks the capacity to incorporate a loss matrix that assigns different penalties to false positives and false negatives. The CART model addressed this limitation by introducing a customized loss matrix that reflects the asymmetric cost structure of marketing misclassifications, assigning a higher penalty to false negatives than to false positives. While this cost-sensitive structure aligned better with business objectives, the CART model achieved a moderate AUC of 0.767, reflecting limited classification accuracy and robustness. To overcome these limitations, a random forest model was implemented, combining the strengths of ensemble learning with cost-sensitive training. It achieved the highest AUC of 0.864 and allowed for the integration of a loss matrix during training.
Feature importance analysis revealed that variables such as the number of days since the last purchase, the amount spent on meat products, and a customer's enrollment length with the company were among the most influential predictors of customer response. The model not only improved classification performance but also supported strategic targeting through interpretable outputs. An economic evaluation demonstrated the practical value of the predictive model. Under a loss matrix where the cost of a false positive was set to $2 and a false negative to $10, the random forest model reduced total campaign costs by approximately 30% compared to a non-targeted approach. These cost savings translate into a meaningful economic impact, particularly when applied to large-scale campaigns. Overall, the findings support the use of random forest with a cost-sensitive design as a superior modeling framework in marketing applications. By aligning machine learning with real-world cost structures, this approach offers both statistical rigor and economic relevance for data-driven decision-making in customer acquisition strategies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163302</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data, Analytics, and Optimization for Production Planning</title>
<link>https://hdl.handle.net/1721.1/163301</link>
<description>Data, Analytics, and Optimization for Production Planning
Malinowski, Maxwell X.
This thesis serves as a case study for the implementation of data analytics and optimization within a high-mix, low-volume electronics production environment in the Aerospace and Defense industry. This case study demonstrates the benefits of data analysis for defining and quantifying operational bottlenecks and explores the implementation of an optimization model to better allocate resources for production planning. Results demonstrate the insights derived from using data and analytics in this environment, and further discussion explores what contributes to an effective implementation of an optimization model in a production setting.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163301</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Industrial Pollution and Firm Ownership Structure: Evidence from M&amp;A</title>
<link>https://hdl.handle.net/1721.1/163300</link>
<description>Industrial Pollution and Firm Ownership Structure: Evidence from M&amp;A
Zhang, Cindy
This paper studies whether firm ownership structure influences pollutive activity. Using facility-level data from the Toxics Release Inventory, I employ a difference-in-differences (DiD) approach to compare toxic chemical release and pollution prevention activity between public and private firms' facilities by exploiting ownership changes. I compare facilities initially owned by private firms that were acquired by public firms and those that were acquired by private firms in the same year. My findings suggest that public acquirers significantly reduce toxic release activity relative to private acquirers. In the reverse case, I find that private acquirers decrease abatement, but pollution volume does not differ significantly. However, for later ownership changes in my sample, private acquirers increase toxic release volume and intensity significantly relative to public acquirers. Lastly, I explore how financial constraints and the local political environment moderate pollution activity. Debt-constrained public acquirers show no significant difference in pollution activity from private acquirers. In Democrat-leaning counties, public acquirers reduce toxic releases more than private acquirers, but in Republican-leaning counties, the differences are less pronounced. Overall, my findings suggest that public firms have decreased toxic release activity over time, but the declines have been offset by increases from private firms.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163300</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Debt Complexity and Equity Behavior</title>
<link>https://hdl.handle.net/1721.1/163299</link>
<description>Debt Complexity and Equity Behavior
Li, Jack
I examine how the complexity of firm debt affects the incorporation of news into equity prices. As residual claimants to firm cash flows, equity investors must be able to value all outstanding debt contracts, suggesting that complex debt can interfere with their ability to process news effectively. Using a model in which debt complexity causes a subset of investors to initially underweight news precision, I derive three predictions for the equity behavior of debt-complex firms around news events: (1) they exhibit greater post-announcement drift, (2) they show elevated trading volume both on announcement day and in the post-announcement period, and (3) their return volatility decreases on announcement day but increases during the post-announcement period. These predictions are supported by empirical evidence in the context of earnings announcements, suggesting that debt complexity introduces meaningful frictions in how news is incorporated into equity markets.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163299</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping Wildness: Simulating Post-Extraction Wildland Regeneration</title>
<link>https://hdl.handle.net/1721.1/163298</link>
<description>Mapping Wildness: Simulating Post-Extraction Wildland Regeneration
Griggs, Crystal Ling
This thesis introduces a novel approach to wildlife habitat classification for ecological regeneration. It is motivated by the extreme environmental degradation caused by mountaintop removal (MTR) in the Appalachian Mountains, a violent coal extraction process that has significantly altered the landscape of this ecologically sensitive region. By integrating remote sensing and Geographic Information Systems (GIS) with machine learning, this research aims to develop a method that transcends traditional human-centric landscape assessments, advocating for a model that foregrounds the habitats and needs of critically endangered species by simulating landscape regeneration and assessing topographical alterations in terms of how design decisions impact wildlife. Central to this study is the concept of Umwelt, the subjective experience of nonhuman species, including how their spatial perception and sensory spectrum are used to discern details within their environment. Umwelt broadens traditional spatial understanding by emphasizing that each species experiences the world through its own sensory filters, which shape its interactions within its habitat. This understanding guides the research’s approach to approximating the Umwelt of the Cerulean Warbler (Setophaga cerulea), a surrogate species in this work, which has faced steep declines due to habitat loss in Appalachia. Through the development of a habitat suitability model that utilizes advanced computational tools and multispectral imagery, the thesis endeavors to offer a new perspective on environmental planning and conservation efforts: a computational approach to near-approximations of Umwelt. The methodological framework seeks not only to classify post-extraction landscapes for their potential to support wildlife but also to inform design and land-use decisions that are sensitive to the temporal and complex processes of natural habitat regeneration.
By challenging the prevailing paradigms of landscape restoration, which often lack consideration for the intricacies of wildland dynamics such as the multitudes of species interactions and interdependencies, this research proposes a new methodology that empowers wildlife to guide the ecological recovery process. The findings underscore the potential of applied GIS and machine learning in environmental advocacy, setting a precedent for future research and practice aimed at the regeneration of ecosystems that considers the ecological realities of all species involved.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163298</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrogen Adoption Dynamics: A Flexible Modeling Framework for U.S. Industrial Applications</title>
<link>https://hdl.handle.net/1721.1/163297</link>
<description>Hydrogen Adoption Dynamics: A Flexible Modeling Framework for U.S. Industrial Applications
Ray, Jennifer
As climate change concerns drive the need for decarbonization, hydrogen stands as a potential tool to help reduce emissions across the United States’ industrial and energy sectors. This thesis develops a flexible modeling framework for hydrogen adoption across multiple industrial applications, designed specifically to support strategic investment decision-making in an evolving market. The tool analyzes six major industries – steel, chemicals, energy storage, biofuels, vehicles, and natural gas – through two metrics: potential hydrogen consumption and threshold prices for economic viability. The framework applies scenario analysis to examine how government policy and technological advancement influence potential market trajectories.&#13;
&#13;
Analysis reveals significant sensitivity to input assumptions. Even small variations in the assumed initial hydrogen production cost can result in significantly different adoption timelines. In scenarios where initial hydrogen production costs are $5/kg, widespread adoption requires maximum policy support and technological progress. However, reducing the initial cost by just $1, to $4/kg, makes broader adoption feasible with less reliance on government intervention. Light-duty fuel cell electric vehicle penetration rate and steel industry growth rate emerge as the most sensitive parameters affecting overall hydrogen demand, followed by biofuel blending rate and hydrogen injection percentage into natural gas infrastructure.&#13;
The vehicles industry is identified as a first mover in widespread hydrogen adoption, followed by steelmaking and methanol production. Hydrogen adoption for natural gas blending, methanol for export, and methanol-to-gasoline applications occur later due to their lower threshold price for economic viability. Under optimal conditions with strong government support and significant technological advancements, total hydrogen demand could reach 48.8 million metric tons by 2050, approximately a sevenfold increase from scenarios with minimal support.&#13;
The tool’s value lies not in projecting a definitive, single-point forecast, but in providing a flexible framework that helps stakeholders navigate market uncertainties as the decarbonization landscape evolves. Future research should integrate supply-side dynamics, infrastructure requirements, and geographic variability to enhance projection accuracy.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163297</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Green Aluminum</title>
<link>https://hdl.handle.net/1721.1/163296</link>
<description>Towards Green Aluminum
Schurr, Kevin
Aluminum is an important metal for facilitating the energy transition. Its high strength-to-weight ratio and easy recyclability make it a useful material in many industries, from automobiles to food packaging. However, the aluminum smelting process accounts for 2% of all global greenhouse gas emissions, due both to the large amount of power needed to drive the electrolysis reaction and to the consumption of carbon anodes in the process. As regulatory changes in Europe raise the monetary cost of emitting carbon, smelters are investigating new technologies to integrate into their operations to cut Scope 1 and 2 emissions. Two such technologies are carbon capture systems to abate process emissions and small modular nuclear reactors to reduce emissions incurred during electric power generation. This work explores the technical and economic feasibility of leveraging these systems at Aluminum of Europe, a primary aluminum smelter subject to these changing European regulations. Results suggest that while these technologies have not yet been specifically adapted for aluminum production, they can play an important role in reducing overall emissions from the smelting process under specific economic conditions. However, the analysis indicates that, at present, significant subsidies are required for such projects to be financially viable.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163296</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Corporate Transparency and Cybersecurity Risks</title>
<link>https://hdl.handle.net/1721.1/163295</link>
<description>Corporate Transparency and Cybersecurity Risks
Kim, David Sunghyo
I study whether disclosure mandates alter the equilibrium of cyberattacks by unintentionally informing cybercriminals. The California Consumer Privacy Act (CCPA) requires companies to disclose their personal information collection practices to consumers, inadvertently informing cybercriminals about the potential benefits of breaching each firm. Using a difference-in-differences design, I find that firms disclosing the collection of valuable personal data face an increased probability of data breaches. These firms also strengthen their cyberdefenses both in terms of cybersecurity software and cybersecurity specialists. Firms trade off cybersecurity costs against the risk of data breaches, with the increase in breach probabilities more pronounced among firms that invest less in cybersecurity. Finally, I find that firms adjust their data collection policies as additional defense strategies. Overall, this study highlights the trade-off between transparency and cybersecurity risks in today’s digital economy.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163295</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fully Connected Digital Ecosystems within Hospitals – AI/ML Solutions for Improved Patient Care</title>
<link>https://hdl.handle.net/1721.1/163294</link>
<description>Fully Connected Digital Ecosystems within Hospitals – AI/ML Solutions for Improved Patient Care
Dugan, Andrew D.
Cardiogenic shock (CS) in the context of acute myocardial infarction (AMI) remains a significant challenge in critical care, with high mortality rates despite the availability of advanced mechanical circulatory support (MCS) devices like the Impella pump. However, adoption of these devices in clinical practice remains limited. This thesis explores two complementary strategies to address these challenges: developing machine learning (ML) models to predict shock severity and assessing the feasibility of integrating hospital Electronic Medical Record (EMR) data into Abiomed’s digital ecosystem to support standardized shock care.&#13;
In the first phase, ML models were trained on multiple clinical datasets to predict Society for Cardiovascular Angiography and Interventions (SCAI) shock stages based on patient data. While these models demonstrated strong predictive performance, feature analysis revealed that SCAI stages often reflect physician treatment decisions rather than purely patient physiology. This raises concerns about their utility as real-time clinical decision tools and suggests that ML applications may be better suited to prompting early data collection and intervention before severe shock develops.&#13;
The second phase evaluated the feasibility of EMR integration to support the broader adoption of standardized shock protocols. After considering regulatory, operational, and technical factors, third- party data aggregation emerged as the most practical path forward. Integrating EMR data could improve outcome tracking, support protocol adoption, and strengthen partnerships between Abiomed and hospitals, creating a foundation for more consistent and proactive shock management.&#13;
Together, these findings highlight the need for predictive tools that guide early clinical action and infrastructure that supports seamless data integration. By advancing both, Abiomed can expand its role in cardiogenic shock care, improve patient outcomes, and lead the evolution of data-driven, standardized treatment strategies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163294</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic Recommendations for Legacy Automakers in the Evolving Automotive Landscape</title>
<link>https://hdl.handle.net/1721.1/163293</link>
<description>Strategic Recommendations for Legacy Automakers in the Evolving Automotive Landscape
Tike, Gauri
The automotive industry is undergoing a transformative shift driven by technological advances in areas such as electric vehicles, autonomous vehicles, software-defined vehicles, and the decarbonization of mobility. Alternative means of transportation are also becoming available, sometimes at a cost lower than owning a car. In some cities, the best way to get from point A to point B might not be a car at all; it might combine heterogeneous modes such as public transportation, biking, ride-hailing services, or a car for different portions of the route. Despite concerns about the environment, global car ownership continues to rise. These changing times pose challenges for legacy automakers: while they are experts in traditional car manufacturing, modern cars require not only mechanical and electrical skills but also deep expertise in software development. With growing EV adoption, Chinese EV automakers are capturing market share quickly. What is the future of mobility amid all these developments? What must traditional automakers do in this era to remain successful? In this report we examine key trends in mobility: global electric vehicle (EV) adoption, software-defined vehicles (SDVs), autonomous vehicles (AVs), and their environmental implications. Based on this research we propose strategic recommendations to help traditional automakers continue their success over the next decade and beyond.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163293</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Navigating Fintech Innovations: Strategic Insights from the United States and India</title>
<link>https://hdl.handle.net/1721.1/163292</link>
<description>Navigating Fintech Innovations: Strategic Insights from the United States and India
Shanbhag, Rishabh Ganesh
This thesis examines how fintech ventures are reshaping financial services through new technologies and strategic choices tailored to different markets. It first looks at key innovations: digital payments, digital wealth management, and open banking, and how they have transformed everyday financial activities. The research then compares how fintech companies operate in the US and India by analyzing how market conditions, government initiatives, regulations, and consumer behaviors shape adoption. Finally, through case studies of Robinhood (US), Revolut (Global), and Paytm (India), the thesis examines how fintech firms navigate the choice between competing with traditional players and collaborating with them to scale under different market scenarios. Together, these insights aim to help entrepreneurs, investors and policymakers understand how strategy and technology come together in the fintech industry.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163292</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forming the Future: A Digital Approach to Simulating Thermoplastic Manufacturing</title>
<link>https://hdl.handle.net/1721.1/163291</link>
<description>Forming the Future: A Digital Approach to Simulating Thermoplastic Manufacturing
Harkavy, Rachael
This thesis develops a digital framework for simulating and validating thermoplastic composite manufacturing processes, focusing on reducing the time associated with new product development. Using Finite Element Analysis (FEA) software (SimSof) and high-precision 3D scanning tools (ScanSof), the research introduces a geometric similarity metric to quantify deviations between simulated and real-world parts. By aligning simulations with production data, the study aims to replace costly physical trials with reliable digital models, accelerating customer onboarding and improving&#13;
manufacturing efficiency.&#13;
&#13;
Key contributions include establishing a systematic pipeline for integrating simulation tools into Oribi Composites’ workflow, defining critical parameters such as laminate width, material card accuracy, and mesh size, and validating their impact on simulation accuracy. Results demonstrate that accurate material modeling and parameter selection significantly enhance digital twin accuracy, while mesh size has minimal influence, allowing for computational cost savings. The research also highlights challenges in replicating real-world conditions digitally, including inconsistent material cards and limited control over pressure profiles. Despite these limitations, the study shows that simulations can reliably predict manufacturable designs within&#13;
customer tolerances, reducing reliance on physical iterations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163291</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Sustainable and Scalable Ecosystem: Breaking the Cycle of Intergenerational Poverty for Single Mothers in Japan with Private Sector Engagement</title>
<link>https://hdl.handle.net/1721.1/163290</link>
<description>Toward a Sustainable and Scalable Ecosystem: Breaking the Cycle of Intergenerational Poverty for Single Mothers in Japan with Private Sector Engagement
Imaeda, Hiroko
Despite Japan’s reputation as an economically advanced nation, it faces one of the highest relative poverty rates among OECD countries, with nearly half of all single-mother households living below the poverty line. This thesis examines why poverty among single mothers persists despite a formal support ecosystem and proposes a systemic redesign grounded in life-stage-aligned, user-centered principles. Drawing on historical-institutional analysis, organizational theory, fieldwork interviews, and auto-ethnographic insights, the study identifies deeply embedded barriers that reinforce fragmented, crisis-oriented support systems misaligned with real-life trajectories. In response, it introduces the "Single Mother Journey" framework, reframing single mothers not as a static category but as a dynamic population with distinct, evolving needs. Through this lens, the thesis exposes critical gaps in preventive support, labor market misalignment, and information accessibility. Building on these findings, it proposes a future-ready support ecosystem, positioning corporations, local municipalities, NPOs, and education institutions as collaborative actors. It presents mumtec, a conceptual digital platform designed to consolidate fragmented services, personalize interventions by life stage, predict crisis points, and generate adaptive policy feedback. The thesis moves beyond surface-level critique by connecting institutional analysis with practical system design to offer a scalable framework for inclusive innovation. Listening to the silent voices of single mothers navigating precarity is an ethical imperative and a strategic necessity for sustainable, resilient societies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163290</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing automotive production scheduling to reduce finished vehicle inventory</title>
<link>https://hdl.handle.net/1721.1/163289</link>
<description>Optimizing automotive production scheduling to reduce finished vehicle inventory
Johnson, Christopher
This thesis addresses inefficiencies in automotive finished vehicle inventory management arising from misalignment between production scheduling and outbound logistics. Traditional production planning prioritizes manufacturing efficiency, causing significant inventory accumulation as vehicles await completion of full shipment loads. This research proposes an Integrated Production and Outbound Distribution Scheduling approach, introducing an optimization step within existing production scheduling workflows to align production sequences for expedited load formation. Back-testing on two automotive assembly lines over 82 weeks reveals a mean inventory reduction potential of 63–65%, with variability influenced by production volumes and vehicle configurations. A proof-of-concept implementation confirms the practical feasibility of optimized schedules, reducing inventory holding times by 33% without disrupting manufacturing operations. Computational performance analysis demonstrates good scalability for instances with fewer than 600 vehicles, though larger scenarios still yield meaningful inventory reductions. This work highlights substantial opportunities for automotive original equipment manufacturers to enhance efficiency by integrating outbound logistics into production scheduling.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163289</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transforming Unstructured Data into Actionable Insights: A Use Case of Generative AI in Operational Technology Problem Management</title>
<link>https://hdl.handle.net/1721.1/163288</link>
<description>Transforming Unstructured Data into Actionable Insights: A Use Case of Generative AI in Operational Technology Problem Management
Gallardo Moncayo, Gabriel A.
The increasing availability and reduced cost of Generative AI applications for the general public have motivated organizations across all industries to implement AI-based solutions in their daily operations. Still, they struggle to determine the capabilities and limitations of this technology when implementing it in their specific context. This thesis addresses these challenges through a practical case study: deploying a text-based Generative AI system (using Large Language Models - LLMs) for automated downtime event characterization within a global industrial operational technology (OT) setting by transforming unstructured&#13;
problem management reports into structured, actionable business insights. The developed software system contains a data pre-processing stage, followed by four LLM-based tasks (LLM-extraction, LLM-autoclassification, multi-aspect multi-level LLM-classification, and LLM-accuracy). We wrap everything in a well-structured and easy-to-understand evaluation framework that ensures the system’s output is format-reliable, accurate, and consistent. Through simple prompt engineering techniques and continuous failure mode analysis, we achieve high accuracy (&gt;89%) and consistency (&gt;79%) for downtime event characterization at 1% of the current cost. In the end, we show that it is possible to implement an AI-based solution within current operational processes while properly communicating its capabilities and limitations and adapting its usage to the purposes where it adds the most value.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163288</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Raw Wire Inventory Management: A Data-Driven Approach to Demand Forecasting and Supply Chain Decision Support</title>
<link>https://hdl.handle.net/1721.1/163287</link>
<description>Optimizing Raw Wire Inventory Management: A Data-Driven Approach to Demand Forecasting and Supply Chain Decision Support
Gebner, Adam R.
This thesis investigates methods to improve demand forecasting and inventory management for raw wire. Challenges such as supply chain disruptions from the COVID-19 pandemic, operational variability, and loss of expertise exposed vulnerabilities in the existing manufacturing system, leading to shortages and inefficiencies. Leveraging extensive production data, this research develops and evaluates tools to predict future wire requirements, optimize inventory, and mitigate these issues while aiming for a 100% service rate.&#13;
Key contributions include:&#13;
1. A data-driven demand simulation model, reducing forecast error and surpassing&#13;
baseline methods&#13;
2. Quantification of waste distributions and variability in wire consumption&#13;
3. An inventory simulation framework for policy evaluation and shortage mitigation&#13;
4. Clustering analysis to classify demand patterns and identify key wire categories&#13;
5. A decision support tool supporting real-time visibility into inventory levels and risks&#13;
The models and tools developed through this project provide enhanced capabilities to predict future wire requirements and manage inventory more effectively through continued development. Though the initial results indicate potential business value, areas for future work include incorporating additional data sources, exploring advanced machine learning techniques, and conducting longer-term pilot studies to quantify business impact. This project demonstrates the value of leveraging data analytics and simulation modeling to enhance supply chain decision-making in complex manufacturing environments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163287</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Economies of Space: Developing a Lean Manufacturing Framework for Work Center Floorspace Reduction</title>
<link>https://hdl.handle.net/1721.1/163286</link>
<description>Economies of Space: Developing a Lean Manufacturing Framework for Work Center Floorspace Reduction
Gerbino, Jacob
This thesis develops a lean manufacturing framework to optimize the use of floorspace in Boeing's Interiors Responsibility Center South Carolina (IRCSC). The primary goal is to eliminate wasted floorspace while increasing production capacity and efficiency. The motivation behind this project stems from the need to address the fully allocated production floorspace at IRCSC and the pressing requirement to add new product lines without expanding the facility's physical footprint. Additionally, the project seeks to prepare IRCSC for possible increases in production rates for the 787 Dreamliner Program, necessitating a redesign of work centers to support higher output levels while enhancing efficiency and reducing costs.&#13;
&#13;
The project employs the DMAIC (Define, Measure, Analyze, Improve, Control) methodology and lean tools such as spaghetti diagramming and value stream mapping to treat "Misused Space" as an additional form of waste, alongside the traditional forms of lean waste. The framework was applied to a sample interior product work center to test its effectiveness. The study involved mapping the current layout, observing technician travel, conducting time studies, and analyzing value stream maps. The methodology facilitated the creation of a new floorplan and scheduling system that consolidates cure times and balances workloads between work cells. Discrete event simulation was used to validate the proposed changes, ensuring they would achieve the desired improvements.&#13;
&#13;
The results of the study revealed inefficiencies in the current layout and scheduling practices of the work center. The proposed changes demonstrated a potential 25% reduction in floorspace and a 55% decrease in product throughput time. The new scheduling and work allocation strategy reduced product throughput time from nine days to four, and the new layout reduced worker travel distances by as much as 50% in some work cells. The lean manufacturing principles and scheduling optimizations discussed in this thesis should be applied to other work centers within IRCSC. Future research should explore advanced methodologies and tools to handle the complexities of more interconnected work centers.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163286</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of AI Integration in Healthcare: Exploring Regulatory,&#13;
Cultural, and Strategic Barriers</title>
<link>https://hdl.handle.net/1721.1/163285</link>
<description>The Impact of AI Integration in Healthcare: Exploring Regulatory,&#13;
Cultural, and Strategic Barriers
Venkatanarayanan, Sriya
This thesis investigates the barriers and enablers to predictive AI adoption in healthcare through a thematic synthesis of 13 academic articles and real-world case studies published over the last five years. Barriers were categorized into three domains: regulatory, cultural, and strategic. These included challenges such as fragmented regulation, clinician skepticism, data quality limitations, and poor alignment with clinical workflows. Cross-cutting patterns, stakeholder tensions, and recurring meta-themes revealed that these barriers are deeply interconnected. Drawing from over 200 individual findings, an actionable visual framework was developed to guide responsible and sustainable predictive AI integration. The proposed model, consisting of an internal “Pyramid” of enablers and an external “Circular Loop” of ecosystem conditions, provides a practical structure for aligning governance, engagement, and workflow with ongoing commitments to equity, collaboration, safety, and transparency.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163285</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative AI in Private Equity for Accumulative Advantage</title>
<link>https://hdl.handle.net/1721.1/163284</link>
<description>Generative AI in Private Equity for Accumulative Advantage
Mahajan, Bonny
This research explores the use of Generative AI (Gen AI) for achieving accumulative gains across various business and technical functions within commercial enterprises under private equity firms. While based on applied experiments in a private equity-owned, resource-constrained portfolio company, many of the findings presented here may apply in other types of organizations. Through this study, we conduct case studies across key departments such as customer service, purchasing, engineering, employee management, and marketing. For each use case, we delve into the utilization of custom-built or publicly available Gen AI-based tools, aiming to understand the unique considerations and challenges that may arise when implementing Gen AI solutions in industries like manufacturing, which have traditionally been underserved by the tech sector. Through this research, we identify the critical role of humans in the loop, emphasizing the importance of UI/UX design, domain expertise, and local culture in the successful adoption and acceptance of Gen AI tools designed to enhance workforce efficiency in portfolio companies. This study also aims to illustrate how investing in Gen AI technologies is ultimately an investment in a company’s most valuable resource—its employees. By equipping employees with innovative tools, the organization not only improves productivity and job satisfaction but also fosters a culture of continuous improvement and adaptability. This research highlights the transformative potential of Gen AI in reshaping traditional business processes and driving sustainable growth in different functions of organizations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163284</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Standard Work for High Mix Low Volume Manufacturing</title>
<link>https://hdl.handle.net/1721.1/163283</link>
<description>Standard Work for High Mix Low Volume Manufacturing
McNulty, Will
This thesis examines the challenges of developing standard work at scale in a high-mix low-volume (HMLV) manufacturing environment. The research is conducted at Re:Build Composite Resources, a thermoset composites (TSC) manufacturer. In the context of the company, impending growth demands more skilled laminators, and the manual, complex nature of TSC lamination exposes the need for improved and documented standard procedures. By documenting existing processes through operator shadowing, time studies, and quality data analysis, a “best-known” standard was created for the production steps of a subset of parts. Two pilot parts—one focused on cutting scrap rates, the other on boosting throughput—demonstrated how standard work instructions and a standard work schedule designed for one-piece flow significantly reduced errors and production variability. The thesis also explores the effectiveness and limitations of using computer vision as a tool to automate work instruction and time study data set generation. Beyond the immediate improvements in quality, efficiency, and new operator onboarding, the project’s scalable framework lays out a roadmap for broader adoption&#13;
of standard work in fast-growing HMLV operations. By focusing first on parts that yield the most significant gains — either due to high volume or high unit cost — organizations can maximize returns on continuous-improvement efforts while not overburdening their engineering staff with excess analysis and documentation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163283</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating The Feasibility of Electrified Process Heating&#13;
for Drug Substance Manufacturing</title>
<link>https://hdl.handle.net/1721.1/163282</link>
<description>Evaluating The Feasibility of Electrified Process Heating&#13;
for Drug Substance Manufacturing
Bhirgoo, Priya Darshini
The pharmaceutical industry relies on high-temperature fluids such as pure steam to support critical operations including equipment cleaning and sterilization and on hot Water-For-Injection (WFI) as a key ingredient for drug substance manufacturing. These high-temperature process-driven heat demands are fulfilled through fossil fuel-based heating which contributes significantly to Scope 1 carbon emissions. Recognizing the link between environmental stressors and human health, Amgen has committed to achieving carbon neutrality by 2027. This thesis explores the feasibility and implications of transitioning from fossil fuel-based process heating to a fully electric system at one of Amgen’s drug substance manufacturing sites. Amgen’s existing fossil fuel-based steam system was analyzed through site visits, engineering reviews, and stakeholder engagements to quantify capital and operating costs, energy usage, and carbon emissions. A fully electric alternative was designed by researching commercial technologies and collaborating with suppliers as well as internal stakeholders. The analysis found that while the capital investment required for electrification is comparable to that of traditional steam systems, the operating costs for an electric system are significantly higher, driven by the higher price of electricity relative to natural gas. From a sustainability perspective, electrification eliminates on-site Scope 1 carbon emissions but shifts emissions to Scope 2, making the environmental benefit dependent on the carbon intensity of the local electricity grid. As grids transition to renewable energy sources, the potential for long-term emissions reductions strengthens. Future work should focus on evaluating the costs of necessary electrical infrastructure upgrades and identifying regions with lower-carbon, lower-cost electricity grids where electrified systems could be more readily implemented.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163282</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Partnerships as Retention Levers: A Study of Credit Card–Entertainment Collaborations</title>
<link>https://hdl.handle.net/1721.1/163281</link>
<description>Partnerships as Retention Levers: A Study of Credit Card–Entertainment Collaborations
Tchelikidi, Cloe
In mature, competitive sectors such as financial services and media and entertainment, customer loyalty is increasingly difficult to sustain. This thesis explores the emergence of cross-industry partnerships, specifically between credit card issuers and digital entertainment platforms, as a strategic response to rising churn and declining differentiation. Through a case study of the American Express Digital Entertainment Credit, the research examines how lifestyle-aligned benefits can foster deeper behavioral engagement, reduce switching, and enhance customer lifetime value. The thesis situates these partnerships within the broader evolution of loyalty strategies, marked by hyper-personalization, subscription fatigue, and platform convergence. Findings suggest that flexible, recurring rewards embedded in consumers’ daily routines offer a path to durable retention, especially among younger, digital-native cohorts. The study concludes that such partnerships are not peripheral marketing tools but increasingly core to competitive strategy in commoditized markets.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163281</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Dynamics of Regulatory Compliance, Cost Management, and Competition in the Pharmaceutical Industry</title>
<link>https://hdl.handle.net/1721.1/163280</link>
<description>Exploring the Dynamics of Regulatory Compliance, Cost Management, and Competition in the Pharmaceutical Industry
Wu, Lanchen
This paper explores how financial pressures, regulatory enforcement, and market dynamics interact to shape pharmaceutical manufacturing quality and drug supply stability. Using a causal loop diagram (CLD), it examines how cost-cutting behavior affects control and validation capabilities, interacts with regulatory agency oversight, and contributes to recurring drug shortages. The analysis highlights how competition drives companies to operate at or near the minimum regulatory requirements, gradually eroding quality systems. Because of the nature of medical products, the quality of a drug cannot be directly assessed by individual users, distributors, or payers, making it necessary for government agencies like the FDA to rely on internal manufacturing data to ensure all drugs meet a minimum standard of quality. Regulatory oversight serves as a safeguard rather than a tool for guiding business decisions. However, its effectiveness is constrained by the frequency of inspections, the capacity of auditors, and limited resources—especially when government budgets are stretched and other priorities take precedence. The paper also discusses how manufacturers may avoid detection by strategically presenting information during inspections, making it harder for auditors to spot issues and allowing weakened controls to persist. Over time, these dynamics reinforce one another, creating a self-sustaining cycle in which cost pressures lead to minimal compliance, quality issues, and regulatory responses that increase costs further. &#13;
As the number of manufacturers shrinks due to market consolidation, supply disruptions become more severe when failures occur. Regulatory discretion—intended to avoid immediate shortages—can unintentionally reduce incentives for long-term quality investment, further weakening the system’s resilience. &#13;
To address these issues, the paper proposes structural changes, including financial accountability for payers during shortages, tighter regulatory focus on process reliability, and linking regulatory flexibility to quality improvement obligations. These approaches aim to create balancing mechanisms that reduce cost-driven deterioration of quality and promote a more stable pharmaceutical supply chain.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163280</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ant Group’s Transformative Impact on China’s Financial Industry</title>
<link>https://hdl.handle.net/1721.1/163279</link>
<description>Ant Group’s Transformative Impact on China’s Financial Industry
Pan, Kathryn
Ant Group, China’s leading digital finance company, has fundamentally transformed the nation’s financial industry through groundbreaking innovations in digital payments, micro-lending, wealth management, and investment advisory. This paper explores the company’s role in reshaping China’s financial ecosystem, analyzing its impact on traditional banking institutions, regulatory policies, and consumer behavior. Utilizing analytical frameworks such as Porter’s Five Forces, PEST analysis, and SWOT analysis, this study provides a comprehensive assessment of the external and internal factors influencing Ant Group’s development and competitive positioning.&#13;
This research highlights Ant Group’s key financial innovations, including its online transaction platform, offline payment services, online credit solutions, digital fund distribution channels, and AI-driven investment advisory. By leveraging advanced technologies such as artificial intelligence, blockchain, and big data analytics, Ant Group has enhanced service efficiency, expanded accessibility, and strengthened risk management capabilities. These innovations have significantly advanced financial inclusion, extending financial services to previously underserved populations. However, Ant Group’s rapid growth has also intensified regulatory scrutiny, prompting major restructuring efforts and adjustments to its business model.&#13;
This paper employs three major analytical frameworks: PEST analysis, Porter’s Five Forces, and SWOT analysis. The PEST analysis examines the political, economic, social, and technological factors shaping Ant Group’s trajectory, highlighting the impact of evolving government policies and macroeconomic conditions on its operations. Meanwhile, Porter’s Five Forces framework assesses the competitive dynamics within China’s financial sector, identifying key market pressures such as rising competition and regulatory constraints. Finally, the SWOT analysis evaluates Ant Group’s internal strengths and weaknesses, as well as external opportunities and threats, offering a comprehensive perspective on the company’s strategic positioning.&#13;
Drawing from these analyses, the paper offers strategic recommendations to ensure Ant Group’s sustained growth and resilience in an increasingly complex financial environment. These recommendations include strengthening regulatory compliance, fostering strategic alliances with both domestic and international partners, and further leveraging technological advancements to expand its service offerings. Additionally, the study explores potential global expansion strategies, considering how Ant Group can adapt its innovative financial solutions to international markets while navigating diverse regulatory landscapes.&#13;
By examining Ant Group’s evolution and the broader implications of its digital finance model, this study contributes to a deeper understanding of fintech’s disruptive power in China’s financial sector. The findings provide valuable insights for industry leaders, policymakers, and scholars interested in the intersection of financial technology, regulation, and strategic business management. As digital finance continues to evolve, Ant Group’s trajectory serves as a critical case study in balancing innovation, regulation, and market competition within a rapidly shifting financial landscape.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163279</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Process Optimization and Proactive Quality Control to Increase Investment Casting Throughput</title>
<link>https://hdl.handle.net/1721.1/163278</link>
<description>Process Optimization and Proactive Quality Control to Increase Investment Casting Throughput
Sircar, Julia Sarita
Blue Origin is an aerospace company with ambitious throughput goals in response to increased commercial space exploration. Pressure to increase throughput is especially apparent within its BE-4 engine business, as the engines support Blue Origin and its customers. Blue Castings is one of the primary in-house manufacturing plants that supports BE-4 production; the plant manufactures rocket engine components through a process called investment casting. Investment casting, by nature, is a complex process involving long rework times, high incidence of defects, and significant process variability. These characteristics contribute to the discrepancies between Blue Origin’s target BE-4 production rate, the production rate feasible at Blue Castings, and its actual delivery rate. This thesis explores how defect management and prevention techniques can improve throughput at Blue Castings and reduce the number of Blue Origin’s schedule delays attributable to Blue Castings. The research began with a baseline investigation and analysis of Blue Castings’ actual and best-case throughput rates compared to its goal. Two gaps were identified: 1) a gap between actual and feasible throughput, and 2) a gap between feasible and target throughput. The analyses highlight the need for better process and quality management to close both gaps. Through a mixed-method approach, the researcher explored and piloted process and data improvements to understand their impact on throughput. This included qualitative and quantitative data collection through on-site interviews, random sampling of defect data, and queries from the manufacturing execution system. With this data, the researcher investigated how machine learning can predict rework severity and support defect prevention. A case study on a selected part number demonstrated the potential to improve throughput by reducing unnecessary rework. 
By aligning stock-on-surface criteria to downstream machining requirements, average rework loops were reduced from thrice the industry benchmark to below the benchmark. This increased capacity at the rework work center and improved the overall delivery of this part. The research also demonstrated how a cross-functional collaboration to formalize producibility lessons reduces the creation of defects, promotes systematic knowledge-sharing, and accelerates improvements similar to the stock-on-surface case study. In parallel, this research evaluated how Blue Castings could improve defect documentation and tracking without causing significant additional effort for operators. The researcher’s findings highlight how handwritten weld maps and inconsistent data capture practices limit effective defect prevention. Digitization of defect tracking is recommended to enable consistent defect data collection and improved root cause and trend analyses. As data quality improves, applying classification ML models for predictive analytics can scale throughput. This work provides recommendations for Blue Castings to implement mechanisms that reduce rework, improve producibility, and increase throughput to align with Blue Origin’s goals.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163278</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Systematic Political Philosophy of Education</title>
<link>https://hdl.handle.net/1721.1/163277</link>
<description>A Systematic Political Philosophy of Education
Pavel, Sonia Maria
My dissertation proposes a fundamental repositioning of philosophy of education relative to political philosophy. I argue that we cannot afford to do political philosophy without a theory of education, just as we cannot afford to make philosophy of education modular, insulated from the rest of political philosophy. To this end, I propose a systematic political philosophy of education, meaning both a systematization of existing approaches to education and a comprehensive assessment of their merits and limitations. I reconstruct the main theories of education – liberal, conservative, democratic, and critical – from their most basic social ontological assumptions to their political programs for education. I then argue that they all struggle to realize their goals for education either as a result of flawed social ontological assumptions or because of a failure to institutionalize their commitments in practice. The lessons I draw from these critiques form the basis of my own novel systematic theory of education. My theory combines traditional political philosophy with insights spanning critical theory, social ontology, and education studies. The central goal is to reconfigure the school as a democratic institution of social learning that not only enables the flourishing of all students but helps society as a whole progress. The project advances on two levels: a methodological and a substantive-normative one. Methodologically, I resist a growing tendency towards the unmooring of political philosophy and philosophy of education. This tendency is peculiar both from a historical and a conceptual perspective. Historically, education was a core issue of political philosophy. Many, even most, of the canonical political philosophers started from the assumption that education is a central purpose of political life.
In my substantive introduction, I take a historical excursus through the canonical political thinkers who best exemplify this emphasis on education: Plato, Rousseau, and Dewey. For all the differences in their views, all three understood education as essential to realizing their visions. They would have regarded any political philosophy that failed to address education as incomplete. Today, however, few political philosophers address the subject at all, let alone give it pride of place in their theories. This unmooring has had bad consequences for both subfields. Much contemporary work in philosophy of education takes for granted a liberal social ontology and liberal normative commitments without sufficient critical scrutiny. Similarly, most contemporary political theory neglects the topic of education and operates under the assumption of fully formed liberal agents. The lack of conceptual clarity is mirrored in political practice. Education is marred by persistent and seemingly intractable disagreements – from controversies about indoctrination to failures to realize the ideal of equality of opportunity. Our substantive disagreements about education, I argue in my first chapter, are not merely value disagreements about the goals of education. They stem from deep-rooted social ontological assumptions about the nature of human beings and society. But these social ontological assumptions are rarely acknowledged, let alone articulated, by political philosophers or philosophers of education. To correct this, I propose a novel metatheory that shows the systematic connections between the social ontology, normative commitments, and political programs of our dominant approaches to education (liberal, conservative, democratic). My reconstruction illuminates several surprising agreements and differences between them. 
For example, it reveals that many of our most heated political debates about education, between left and right liberals, are merely intramural disagreements among thinkers committed to the same individualist ontology. The systematic reconstruction also illuminates these theories’ failure to generate a coherent vision for education. My critiques show that each approach is characterized by a flawed or incomplete social theory which prevents it from promoting its own values and fulfilling its aims for education. In the case of liberal theories, I show that the liberal goal of cultivating autonomy is self-undermining in light of liberal theory’s individualist social ontology. In the second chapter, I turn to critical theories, which focus on the function of education in reproducing our broader social system. Whereas the dominant approaches start by asking about the nature and goals of education in general, critical theories analyze our contemporary educational systems under specific political and economic conditions. They reveal how schools contribute to perpetuating an oppressive and unjust social system. In other words, the focus of these theories is not on the school as a standalone institution, but as a particularly important subsystem in a larger process of social reproduction. While they are promising in many ways, I nevertheless argue that critical theories of education also have distinct limitations. In particular, even though their social theory and normative commitment are more compelling than the dominant views’, they do not satisfactorily translate these into practical proposals for remaking our systems of education. Having found none of the existing approaches fully satisfactory, I start developing the positive and evaluative dimensions of my own view in the third chapter. I go beyond critical social theory while relying on the broad strokes of its ontology of the human.
My aim is to supplement this ontology by drawing on both empirical social studies and complexity theory to more precisely characterize the social relations and practices that constitute the domain of education. More specifically, I argue that we can best understand the educational subsystem by attending to its overlap and co-integration with the family, the state, and economic production. Schools are the mediating institutional domain between the family on one hand and the polity and economic production on the other. At the evaluative level, I articulate three critiques of social pathologies that I believe have been ignored or underutilized in critical education studies: alienation, commodification, and fragmentation. Alienation refers to a pathological relation of disconnection from one’s own learning, other students, and teachers. Commodification and fragmentation, on the other hand, are problems with the organization and distribution of resources in the education system. In my fourth and final chapter, I propose a new program for education that seeks to overcome some of the barriers faced by other systematic theories of education. Attempting to counter the problems I diagnosed and explained in the third chapter, I argue for a few different kinds of interventions. First, I propose restructuring the educational system in order to resist fragmentation by pursuing a unified distributive pool, consolidating school districts, and abolishing charters. Second, I argue for a reconfiguration of the co-integrated subsystems of the family, the school, and production that seeks to empower both children and those involved in their care to be involved in free, meaningful work. Finally, I articulate a set of classroom-level practices that seek to equalize access to development for individual students while cultivating their collective social and political imagination. 
One of the long-term goals is to make schools into democratic institutions of social learning, through which we strive to remove social blockages such as ideology and reflexivity deficits, in order to collectively solve political problems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163277</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Searching with Intuition: Exploring (the bounds of) LLM-guided Search with Unknown Objectives</title>
<link>https://hdl.handle.net/1721.1/163276</link>
<description>Searching with Intuition: Exploring (the bounds of) LLM-guided Search with Unknown Objectives
Kaashoek, Justin H.
Large language models (LLMs) can perform a wide range of search and optimization tasks over discrete spaces. This work seeks to explore the limits of LLM-guided search. We construct a set of text optimization tasks with different levels of "intuitiveness" and evaluate whether LLMs can effectively optimize objectives. We show that the LLM's performance depends not only on its intuition for the objective, but also on the alignment between the objective and its priors. We also find that the LLM can successfully optimize an objective even without an explicit description of the objective. Our results largely focus on greedy search strategies; we develop a theoretical characterization of conditions under which greedy search is optimal, meaning the LLM's failures result from a fundamental inability to take gradient-like steps, not suboptimal search.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163276</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Semantic Account of Distributional Constraints on Temporal in-Adverbials</title>
<link>https://hdl.handle.net/1721.1/163275</link>
<description>A Semantic Account of Distributional Constraints on Temporal in-Adverbials
Rouillard, Vincent S
Temporal in-adverbials (TIAs) are a class of English expressions exemplified by "in three days". They are remarkable in that, depending on the syntactic position they occupy, TIAs are subject to very different distributional constraints. In some configurations, their licensing is conditioned by the lexical aspect of verbal predicates. In others, these expressions are negative polarity items. Though both varieties of TIAs have been discussed extensively in the semantics literature (Gajewski, 2005, 2007; Hoeksema, 2006; Iatridou and Zeijlstra, 2017, 2021; Krifka, 1989, 1998), no attempt has been made to understand the relationship between the two. I offer a unified semantic analysis of TIAs, which derives their eclectic distributional constraints from semantic principles.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163275</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metrical Grids and Active Edges</title>
<link>https://hdl.handle.net/1721.1/163274</link>
<description>Metrical Grids and Active Edges
Asherov, Daniel
Theories of word stress assignment differ in the kind of representations they adopt. One family of theories takes stress to be assigned by grouping stress-bearing elements into small units below the level of the word (typically, metrical feet), such that one element in each unit is marked as stronger, hence stressed (e.g., Liberman and Prince 1977; Hayes 1980). Another family of theories, often referred to as grid-only, models stress assignment without appealing to feet or similar bracketed representations above the syllable (Prince 1983; Selkirk 1984; Gordon 2002).&#13;
While the grid-only approach generates the attested languages with relatively simple representations, it also generates a host of patterns which are very different from those attested in human languages (Hayes 1995; Kager 2012; also see Stanton 2016).&#13;
This thesis aims to solve a set of overgeneration problems that arise in the grid-only approach. The solution involves three components. The first is a novel class of constraints that are sensitive to word edges but unspecified to the edge they apply to (left or right). The value of this edge, considered the “active” edge, is determined by the ranking between two competing constraints (cf. Richards 2016). The second component involves a specific characterization of alignment constraints and the crucial exclusion of computationally weaker or stronger alternatives. The third component is a set of principled fixed rankings between two classes of constraints. In particular, I propose that constraints sensitive to the active edge systematically outrank constraints that regulate rhythmic alternations (cf. van der Hulst 1997; 2012). The result is a grid-only theory of stress that has a significantly tighter fit to the typology compared to previous theories.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163274</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Throttleable Attitude Control Scheme for Electrospray Propulsion Systems</title>
<link>https://hdl.handle.net/1721.1/163273</link>
<description>Development of a Throttleable Attitude Control Scheme for Electrospray Propulsion Systems
Harjono, Hanna-Lee
Electrospray thrusters have emerged as highly promising propulsion options for small satellites due to their low size, weight, and power requirements. These thrusters offer precise, efficient, and scalable attitude control, making them ideal for missions requiring fine adjustments and advanced capabilities such as formation flying and docking maneuvers. However, to fully exploit the potential of electrospray thrusters, control strategies specific to them must be developed. In this work, a parameterized, PID gain-scheduled attitude controller that leverages the unique throttleability of electrospray thrusters is developed and validated. The developed controller is adaptable across operating conditions, as well as electrospray thrust coefficient values. Extensive modeling efforts are undertaken to incorporate the throttleability and operational constraints of electrospray thrusters, ensuring accurate performance predictions. The control system is simulated under various operating conditions to assess and verify its functionality and robustness against disturbance torques. Validation experiments in a magnetic levitation CubeSat testbed are proposed.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163273</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visibility in synthetic aperture radar satellite data: metric formulation, observation scheduling, and orbit design</title>
<link>https://hdl.handle.net/1721.1/163272</link>
<description>Visibility in synthetic aperture radar satellite data: metric formulation, observation scheduling, and orbit design
Kramer, Evan L.
Earth observation satellites serve as vital information gatherers for effectively addressing some of humanity’s most pressing challenges, including management of limited resources and minimization of losses from disasters. Synthetic aperture radar (SAR) is a type of active remote sensing instrument that operates in the microwave portion of the electromagnetic spectrum and is a preferred Earth observation system thanks to the reliable imagery it can collect in all illumination and weather conditions. SAR data is acquired using a side-looking viewing geometry in which the radar is pointed perpendicular to the satellite platform’s direction of motion. This viewing geometry, in conjunction with the illuminated terrain’s topography, results in geometric distortions termed layover and shadow. These distortions degrade the utility of the collected imagery since they effectively obscure portions of the image and preclude extraction of actionable insights. While geometric distortions will be ever-present in SAR imagery, their location and coverage can be manipulated by controlling the relative orientation between the satellite and the illuminated topography. Such manipulation has historically been infeasible for legacy SAR satellites that collect globally consistent data sets under rigid operating requirements. However, the recent advent of commercial SAR satellite constellations has re-framed the practicality of carefully tuned observation geometries that maximize region of interest visibility. Commercial SAR constellations operate on a task-wise basis that grants data end-users flexibility in specifying desired observation parameters including acquisition times and observation geometries. However, a mismatch between on-orbit capabilities and delivered data quality exists due to a lack of formalized tools for planning observations with maximum region of interest visibility. Specifically, no systematic method for identifying visibility-favorable observation geometries exists.
This dissertation addresses this gap in a stepwise approach. First, an extension of open-source radar processing software is developed that enables prediction of layover and shadow in a 2D distortion mask for any satellite-target relative geometry. Visibility metrics are then defined to represent the favorability of a particular observation geometry with respect to a distortion mask. The computation of visibility metric scores at geometries spanning the entire sample space enables creation of visibility maps that completely characterize the visibility characteristics of a given region of interest. To broaden the suitability of visibility maps for observation planning, a set of generalizable visibility maps are created to enable estimation of region of interest visibility characteristics in mission scenarios that are computationally constrained and information-limited. Visibility maps are then directly integrated into satellite operations by developing the first SAR observation scheduling algorithm that explicitly accounts for visibility. Finally, visibility is considered in the orbit design process to establish general guidance on optimal repeat ground track orbit parameters for pre-defined region of interest visibility characteristics. Region of interest visibility improvements of up to 90% are obtained for individual tasks when using the observation planning tools developed in this dissertation. Constellation-wide visibility improvements of 18% are achieved with modest reductions in traditional performance measures when integrating visibility into observation scheduling. Two-fold improvements in the visibility characteristics of observation opportunities are attained for orbits designed to maximize overpass geometry quality.
The contributions of this dissertation are timely, given the concurrent proliferation of flexible, high-resolution SAR observation capabilities, and lay the groundwork for enabling the acquisition of SAR data that is maximally useful for limited resource management, disaster response, and other applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163272</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulation Modeling of Drug Substance Tech Transfer Timelines at Amgen</title>
<link>https://hdl.handle.net/1721.1/163271</link>
<description>Simulation Modeling of Drug Substance Tech Transfer Timelines at Amgen
Goel, Viraat Yogi
Technology transfer (TT), or the process by which a product's manufacturing is moved and scaled, is a complex business process with countless deliverables and stakeholders. This is especially true in biomanufacturing, where drug commercialization timelines are measured in years, manufacturing facilities are specially designed, and regulations must be stringently met. This systems-level complexity can create inefficiencies in the TT process, lengthening timelines and wasting resources. In this project, we use simulation modeling techniques to digitally model Amgen's Commercial Tech Transfer (CTT) process for biologic drugs. We use virtual experimentation to identify key bottlenecks in the TT workflow, quantify how workstream alterations impact project timelines, and identify process changes likely to shorten timelines. We also extend this analysis to Amgen's New Product Introduction (NPI) process, identifying how coordination between upstream and downstream processes may accelerate NPI timelines. Finally, we link this project to the ongoing development of TT data visualization dashboards.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163271</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Safety Stock Modeling for a Medical Devices Supply Chain</title>
<link>https://hdl.handle.net/1721.1/163270</link>
<description>Safety Stock Modeling for a Medical Devices Supply Chain
Chong, Julie
This thesis examines the current inventory management practices at a leading manufacturer of medical devices, and identifies areas for significant improvement. The analysis reveals inefficiencies in safety stock management, with finished goods inventories being excessively high and raw material stocks being underestimated. The study applies single-echelon and multi-echelon inventory modeling to demonstrate potential cost savings through optimized safety stock levels. Additionally, it highlights the importance of reevaluating high service level targets and improving forecasting accuracy to reduce reliance on costly countermeasures. The thesis also emphasizes the need for effective management of component lead times and enhanced data visibility. Recommendations include transitioning to data-driven safety stock calculations, adopting multi-echelon inventory optimization, reassessing service level targets, enhancing forecasting accuracy, and improving component lead time management. By implementing these strategies, the company can enhance operational efficiency, reduce costs, and build greater resilience in its supply chain.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163270</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding home broadband coverage through existing Low Earth Orbit megaconstellations</title>
<link>https://hdl.handle.net/1721.1/163269</link>
<description>Expanding home broadband coverage through existing Low Earth Orbit megaconstellations
Gonzalez Martinez, Gretel
Expanding broadband access to underserved areas continues to be a significant challenge for Internet Service Providers (ISPs). While their services perform well in high-density regions, ISPs face scalability limitations in sparsely populated areas where infrastructure costs must be spread across a smaller customer base. This study explores the potential of Low Earth Orbit (LEO) satellite megaconstellations as a scalable solution for extending broadband coverage in the United States. By analyzing the technical capabilities, deployment timelines, and economic feasibility of partnering with LEO satellite providers, this research offers a strategic framework for integrating satellite broadband into ISPs’ service portfolios.&#13;
&#13;
A customer demand model identifies approximately 17 million unserved households within the addressable market of one of the largest U.S. telecommunications companies. The business case assessment evaluates broadband profitability by optimizing customer base size relative to proximity to existing infrastructure. While fiber optics remains the most profitable solution in high-density areas and fixed wireless access effectively utilizes excess 5G capacity, both require substantial infrastructure investment, limiting their feasibility for rural broadband expansion. In contrast, a satellite broadband partnership emerges as the most cost-effective solution for at least 1 million households, surpassing the profitability of currently existing offerings. With minimal capital investment, satellite technology enables rapid customer acquisition and scalable nationwide expansion. The analysis highlights the critical role of wholesale agreements in profitability and the need to secure a minimum revenue share of 16.5% to reach the break-even point.&#13;
&#13;
Performance modeling and curve approximation techniques estimate that if Kuiper meets Federal Communications Commission (FCC) deployment milestones, it could serve 8.5 million customers by 2026, with full nationwide coverage projected by 2029. Under a 200x oversubscription model, Kuiper’s total subscriber capacity could scale to 32.8 million, demonstrating its ability to complement current broadband offerings. While LEO broadband networks can achieve capacities in the tens of Tbps, they remain far below fiber networks, which operate in the thousands of Tbps. Rather than competing directly, satellite broadband is positioned as a complementary solution, addressing connectivity gaps in rural and underserved regions.&#13;
&#13;
To capitalize on these findings, this study recommends leveraging existing LEO megaconstellations to expand broadband coverage nationwide. A phased rollout should begin with a beta program in California, the state with the highest number of unserved households, to validate network performance and optimize deployment for broader expansion. Partnering with an existing LEO megaconstellation could effectively bridge the digital divide in rural areas, expand service offerings, and enable a stronger position in the growing satellite broadband market.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163269</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Economic Determinants of Increased Use of Performance-Vesting Provisions in CEO Incentives</title>
<link>https://hdl.handle.net/1721.1/163268</link>
<description>Economic Determinants of Increased Use of Performance-Vesting Provisions in CEO Incentives
Kim, Jason Gwanhee
This study examines the determinants of firms adopting performance-vesting long-term incentive (PLI) awards, a rapidly growing form of executive compensation. Using data provided by Equilar on Russell 3000 firms, I investigate how a firm's contracting environment and inter-firm networks influence the adoption and design of PLI awards. I find that stock liquidity and analyst coverage significantly increase the likelihood of adoption by enhancing the informativeness of performance measures. The findings suggest that firms adopt PLI awards to better align managerial incentives with shareholder interests, focusing on the measures that are both reliable and strategically aligned. I also show that board interlocks, particularly those involving compensation committee members, and shared compensation consultants play a significant role in facilitating the diffusion of PLI awards across firms.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163268</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Content Moderation Interventions for Addressing Online Misinformation</title>
<link>https://hdl.handle.net/1721.1/163267</link>
<description>Essays on Content Moderation Interventions for Addressing Online Misinformation
Martel, Cameron
In Chapter 1, I examine the efficacy of fact-checker warning labels as a content moderation intervention for addressing online misinformation. Warning labels from professional fact-checkers are among the longest-standing interventions against online misinformation. But are fact-checker warning labels effective for those who distrust fact-checkers? In a first correlational study, we validate a measure of trust in fact-checkers. Next, we conduct meta-analyses across 21 experiments in which participants evaluated true and false news posts and were randomized to either see no warning labels or to see warning labels on a high proportion of the false posts. Warning labels were on average effective at reducing belief in, and sharing of, false headlines. While warning effects were smaller for participants with less trust in fact-checkers, warning labels nonetheless significantly reduced belief in, and sharing of, false news even for those most distrusting of fact-checkers. Our results suggest fact-checker warning labels are a broadly effective tool for combating misinformation.&#13;
&#13;
In Chapter 2, joint with Jennifer Allen, Gordon Pennycook, and David G. Rand, I investigate the potential of crowdsourced fact-checking systems to identify misleading online content. Social media platforms are increasingly adopting crowd-based content moderation interventions for identifying false or misleading content. However, existing theories posit that lay individuals can be highly politically biased, and that these strong political motivations inherently undermine accuracy. Alternatively, we propose that political and accuracy motivations may, in some cases, operate in tandem – in which case politically motivated individuals need not hamper truth discernment. We empirically assess this by analyzing a survey study of misinformation flagging and field data from X’s Community Notes. Consistent with a simple model of flagging behavior, posts that are both false and politically discordant are flagged the most. Importantly, we find that more politically motivated users flag a greater number of posts, engage in more politically biased flagging, and yet exhibit the same or better flagging discernment. Together, these results show that politically motivated individuals are integral to provisioning a high overall quantity and quality of crowdsourced fact-checks.&#13;
&#13;
In Chapter 3, I assess the perceived legitimacy of different content moderation interventions for addressing online misinformation. Current content moderation practices have been criticized as unjust. This raises an important question – who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative survey experiment in which U.S. participants evaluated the legitimacy of hypothetical content moderation juries tasked with evaluating whether online content was harmfully misleading. These moderation juries varied on whether they were described as consisting of experts, laypeople, or non-juries. We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during content evaluation. Overall, participants evaluated expert juries as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions – nationally representative or politically balanced composition enhanced legitimacy, as did increased size, individual juror knowledge qualifications, and enabling juror discussion. Our findings shed light on the foundations of institutional legitimacy in content moderation and have implications for the design of online moderation systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163267</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Materials and Devices for Optoelectronic Packaging</title>
<link>https://hdl.handle.net/1721.1/163266</link>
<description>Materials and Devices for Optoelectronic Packaging
Weninger, Drew Michael
Over the last two decades, improvements in semiconductor manufacturing have allowed for the commercialization of silicon photonic integrated circuits with over 10,000 devices. These chips are critical to data and telecommunications networks, where they convert and encode optical signals to electrical signals, and vice versa, in the form of transceivers. Scaling up the number of transceivers and optical fiber connections, or optical input/output (I/O), will be critical to meet the exponential rise in demand for cloud data capacity since 2010. However, the costly process of active alignment and bonding of optical fiber arrays directly to photonic chips presents a barrier to their high-volume packaging and assembly. This approach limits optical I/O density to a maximum of 8 connections per millimeter, since optical fibers for communications applications have cladding diameters of 125 microns.&#13;
&#13;
To address this challenge, this thesis explored a new field of silicon integrated photonics involving chip-to-chip (i.e. flip-chip) optical coupling. Evanescent chip-to-chip couplers were designed, fabricated, packaged, and tested for directly connecting silicon photonic chips to other silicon photonic chips, interposers, or printed circuit boards using automated assembly. The design's compact footprint allows for coupler pitches below 10 micron, or an optical I/O density of greater than 100 connections per millimeter, to be realized - an order of magnitude improvement over fiber-to-chip connections. By designing the coupler to use silicon materials and back-end-of-line compatible packaging processes, ease of integration with existing microelectronic foundry tool sets was ensured. Results from an experimental flip-chip coupler prototype showed greater than 90% coupling efficiency with micron scale alignment tolerances when coupling from silicon nitride to silicon-on-insulator waveguides, the first demonstration of such a device. &#13;
&#13;
To further improve optical flip-chip coupler performance, designs were proposed for combining the evanescent coupler with an integrated graded index lens using silicon oxynitride films. Such a device would provide a universal coupling interface in silicon photonics for both chip-to-chip or fiber-to-chip connections. Simulations showed sub-dB coupling loss across all interfaces including flip-chip coupling across a 10 micron gap. Initial fabrication processes were established to deposit, pattern, and etch greater than 10 micron thick silicon oxynitride graded index lenses on silicon and glass substrates. In showing that automated pick-and-place tools can be used for photonic chip assembly, this work represents a critical step in eliminating active alignment and sustainably scaling optical I/O in future transceiver packages.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163266</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Selecting for and Selecting Despite: A Javanese case study</title>
<link>https://hdl.handle.net/1721.1/163265</link>
<description>Selecting for and Selecting Despite: A Javanese case study
Lesure, Cora
This is an investigation of the argument structure of Javanese (Austronesian, Indonesia) which focuses on the distribution of four core derivational morphemes: the Actor Voice prefix, and the suffixes -ake, -i, and -an. The project is based on original consultant work conducted with a speaker of the Central dialect of Javanese. The work establishes language internal diagnostics for various aspects of a stem's lexical semantics and lexical category and then utilizes these criteria to analyze a wide variety of morphological derivatives, both verbal and nominal. The resulting analysis is able to predict the distribution of derivational morphemes and the nature of their resulting derivatives to a higher degree than what was previously understood to be possible.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163265</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering Mandarin Speaker Knowledge with Language Game Experiments</title>
<link>https://hdl.handle.net/1721.1/163264</link>
<description>Uncovering Mandarin Speaker Knowledge with Language Game Experiments
Fu, Boer
Mandarin Chinese offers many intriguing puzzles for linguists because it has a shortage of morphophonological alternations. This has resulted in indeterminacy in various aspects of its phonological grammar, triggering much debate on syllable structure and allophonic mapping. The ambiguity of the data is also a problem for children acquiring Mandarin since alternative grammars can account for the surface forms equally well.&#13;
&#13;
In order to find out what Mandarin speakers have learned about the phonology of their language, I conducted two language game experiments based on fanqie secret languages. It was found that markedness and faithfulness constraints are psychologically real for Mandarin speakers. Furthermore, the interactions between markedness and faithfulness constraints are shown to have an effect on glide movement in the language game. In addition, much speaker variation was observed in the experiment. I demonstrate that it is the result of constraint ranking variation. Nevertheless, general population-level trends on constraint ranking could still be identified. These trends lead to insights on phonological learning beyond Mandarin, showing evidence for naturalness bias and lexicon optimization.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163264</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Business Value of Enterprise Digital Architecture</title>
<link>https://hdl.handle.net/1721.1/163263</link>
<description>Business Value of Enterprise Digital Architecture
Venkata Aditya, Saraswatula (Adi SV)
Digital technologies are fundamentally reshaping markets and organizations globally. This thesis is exploratory research that seeks to explain how large multi-regional and global enterprises determine, prioritize, measure, and manage business value outcomes of digital investments over time. I examine the value construct of digital initiatives in firms from different industries by interviewing various stakeholders. Insights surfaced from this primary research are analyzed in conjunction with the concepts from current literature. Qualitative findings are proposed, and a list of value metrics is presented that can serve as a future reference for firms. A causal loop diagram is proposed to visualize firm capabilities and value dynamics.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163263</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk-Aware Reinforcement Learning with Safety Constraints</title>
<link>https://hdl.handle.net/1721.1/163262</link>
<description>Risk-Aware Reinforcement Learning with Safety Constraints
Feng, Meng
Safety is a critical concern in reinforcement learning (RL) and learning-based systems more broadly, as ensuring reliable and safe decision-making is essential for their deployment in real-world applications. Traditional approaches to address safety often rely on techniques such as reward shaping, carefully curated training data, or explicit handcrafted rules to avoid unsafe actions. More recent advancements have adopted the Constrained Markov Decision Process (CMDP) framework, which trains agents while explicitly enforcing constraints on auxiliary measures such as safety or risk. However, these methods often suffer from significant constraint violations. This thesis identifies the root cause of such violations as stemming from the pursuit of maximal task performance in each policy update. Given the inherent limitations of sample-based constraint assessments in RL, where data is limited and approximation errors are inevitable, these methods often fail near constraint boundaries, leading to excessive violations. To address this, we propose a novel constrained reinforcement learning algorithm that dynamically adjusts its conservativeness during policy updates. By incorporating the risk of constraint violation into the update process, our method can shift focus toward constraint satisfaction when violations are likely, while still striving to improve task performance whenever feasible. Our algorithm reduces constraint violations by up to 99% compared to state-of-the-art baselines while achieving comparable task performance. In the second part of this thesis, we extend CMDPs to address multi-goal, long-horizon problems. We augment the CMDP formulation to incorporate goals, enabling it to handle multiple goals while preserving goal-independent constraint specifications in CMDP. To tackle the complexity of long-horizon tasks with high-dimensional inputs (e.g., visual observations), we propose a method that integrates planning with safe reinforcement learning. 
By leveraging deep reinforcement learning, we acquire the essential components for planning, including a low-dimensional state-space representation and planning heuristics. The planning algorithm then decomposes long-horizon problems into a sequence of shorter, easier subgoal-reaching tasks. The learned agents safely navigate toward these subgoals step by step, ultimately reaching the final goal. We evaluate our method on both single-agent and multi-agent tasks. In 2D navigation, our approach demonstrated up to 74.2% risk reduction, while in visual navigation, it achieved up to 49.3% risk reduction, all while reaching comparable or better success rates.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163262</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Inventory Rebalancing: Strategies for Managing Excess Inventory in a Dynamic Supply Chain</title>
<link>https://hdl.handle.net/1721.1/163261</link>
<description>Optimizing Inventory Rebalancing: Strategies for Managing Excess Inventory in a Dynamic Supply Chain
Oludipe, Lanre
The increasing demand for faster consumer delivery has led retailers to establish smaller regional distribution centers alongside traditional main distribution centers (MDCs). However, the limited capacity of some of these regional centers heightens the need for precise inventory forecasting and deployment to minimize excess inventory, particularly when few viable outlets exist for excess inventory. This research examines strategies to mitigate excess inventory at regional centers through inventory rebalancing, the integration of additional outlets, and modifications to existing inventory policies. A Monte Carlo simulation was conducted to compare the performance of the current system with a modified system incorporating these enhancements. The results showed that the modified system improved capacity utilization and reduced inventory deployment from the MDC without affecting margin. These improvements can enable more agile operations at smaller regional centers, reduce inventory buildup, and reduce the pressure of precise inventory deployment.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163261</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electric Vehicle Fleet Charging: A Simulation-Based Comparison of Charging Strategies and Cost Implications</title>
<link>https://hdl.handle.net/1721.1/163260</link>
<description>Electric Vehicle Fleet Charging: A Simulation-Based Comparison of Charging Strategies and Cost Implications
Knapp, Rachael
The global shift to electric vehicles (EVs) is progressing rapidly, driven by the need to reduce greenhouse gas (GHG) emissions and global reliance on fossil fuels. However, fleet electrification presents unique challenges, particularly in regard to rolling out the necessary charging infrastructure and maintaining operational efficiency. This study examines how various depot-based fleet charging strategies impact up-front capital and long-term operational expenditures. The operational feasibility of each method is evaluated through the use of a discrete event simulation. The study incorporates fleet data to assess the time required to charge the fleet, the number of chargers needed, and the number of associates needed to operate manual strategies. The analyzed charging methods include dedicated level 2 charging, vehicle swapping, level 2 cable swapping, level 3 cable swapping, and sequential and simultaneous charging. Key findings indicate that while a 1:1 vehicle-to-charger ratio ensures charging reliability within the designated time, it incurs the highest capital costs. Alternative strategies, such as cable swapping and simultaneous charging, significantly reduce costs while successfully charging the fleet within the charging window.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163260</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Integrated Optimization Model for Large-Scale EV Fleet Deployment: Balancing Emissions Reduction and Operational Costs</title>
<link>https://hdl.handle.net/1721.1/163259</link>
<description>An Integrated Optimization Model for Large-Scale EV Fleet Deployment: Balancing Emissions Reduction and Operational Costs
Kasliwal, Mohit
This thesis presents an integrated optimization framework designed for the large-scale deployment of electric vehicles (EVs) within commercial fleets, specifically focusing on balancing emissions reduction and operational cost efficiencies. Utilizing Verizon’s extensive fleet of over 10,000 light-duty vehicles across 1,000 sites as a case study, the research addresses the challenges and complexities in effective site selections for such a large and dispersed fleet. &#13;
The research involved developing and testing several optimization models under varying scenarios, including scenarios prioritizing maximum operational savings, maximum emissions reduction, and a hybrid model employing an internal cost of carbon (ICC) to balance both operational and environmental objectives. The model essentially develops a ranking system for sites – suggesting which sites to electrify in which year and order, with how many EV conversions (from existing ICE vehicles) at each site.&#13;
The results highlight the importance of tailoring EV deployment strategies to site-specific conditions, such as unique vehicle usage patterns, grid emissions profiles, regional operational costs, and available incentives. Particularly, smaller sites were found to offer greater relative benefits in terms of both cost savings and emissions reductions per unit of capital invested due to their high average mileage, making them strategic priorities for early electrification.&#13;
Operational feasibility was also thoroughly examined, recommending practical constraints such as limiting the number of sites electrified annually to ensure project manageability and effectiveness. &#13;
Sensitivity analyses addressed critical uncertainties such as battery degradation over the vehicle lifespan and the impact of extreme weather on EV performance. These analyses underscore the necessity of conservative battery range buffers ("safe ranges"). Robust load management strategies can be deployed to significantly reduce demand charges and optimize charging schedules based on time-of-use rates where available.&#13;
Recommendations from the study advocate for implementing a hybrid optimization strategy incorporating an ICC based on corporate goals, continuous adaptive management informed by ongoing data collection, and strategic infrastructure investments to future-proof EV deployments. Policy alignment is also critical to enhance economic viability via incentives and ensure regulatory compliance.&#13;
Finally, the thesis proposes future research directions, including investigation of advanced load management and integration with renewable energy sources, exploring bi-directional charging to add revenue streams, incorporating marginal operating emissions rate (MOER) data to further reduce grid emissions and exploring the resilience of EV fleets to power outages. These initiatives aim to further enhance strategic decision-making and ensure the long-term sustainability and efficiency of fleet electrification programs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163259</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Breaking the Chain: Building Resilience in the Insurance Value Chain</title>
<link>https://hdl.handle.net/1721.1/163258</link>
<description>Breaking the Chain: Building Resilience in the Insurance Value Chain
Chuah, Chung Jin
This thesis examines how strategic transformation approaches reshape the resilience of the Property &amp; Casualty (P&amp;C) insurance industry in light of ongoing technological disruption, climate change, and regulatory pressures. Through empirical analysis of nine insurers, the study reveals that while all transformation types improve performance, phased 'test-refine-execute' strategies achieve superior outcomes by combining operational focus with strategic agility. The research identifies four implementation levers: (i) digital modernization, (ii) phased transformation execution, (iii) resource-allocation agility, and (iv) aligned leadership - which together explain why some transformations succeed where others fail.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163258</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Domain Adaptation of VLM for Soccer Video Understanding</title>
<link>https://hdl.handle.net/1721.1/163257</link>
<description>Domain Adaptation of VLM for Soccer Video Understanding
Jiang, Tiancheng(Tony)
Vision Language Models (VLMs) have demonstrated strong performance in multi-modal tasks by effectively aligning visual and textual representations. However, most video understanding VLM research has been domain-agnostic, leaving their transfer learning capability to specialized domains underexplored. In this work, we address this by exploring the adaptability of open-source VLMs to specific domains, focusing on soccer as an initial case study. Our approach uses large-scale soccer datasets and an LLM to create instruction-following data, which we use to iteratively fine-tune the general-domain VLM in a curriculum learning fashion (first teaching the model key soccer concepts, then question-answering tasks). The final adapted model, trained using a curated dataset of 20k video clips, exhibits significant improvement in soccer-specific tasks compared to the base model, with a 37.5% relative improvement for the visual question-answering task and an accuracy improvement from 11.8% to 63.5% for the downstream soccer action classification task.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163257</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimizing Cost of Intra-Yard Finished Vehicle Logistics Through Automation and Optimization</title>
<link>https://hdl.handle.net/1721.1/163256</link>
<description>Minimizing Cost of Intra-Yard Finished Vehicle Logistics Through Automation and Optimization
Garber, Jeremy
This thesis analyzes and validates autonomous Finished Vehicle Logistics (FVLa) operations at the plant of an automotive Original Equipment Manufacturer (OEM), through the development of a Vehicle-Plug-In (VPI) system with Level 4 autonomous driving capabilities. The research combines process flow analysis with FlexSim simulation modeling to optimize operational parameters and assess safety performance. Results demonstrate FVLa operational feasibility with a recommended VPI inventory of 750 units and a 6-hour replenishment cycle. The study's key contributions include a validated operational model using Economic Order Quantity calculations and a safety framework utilizing Bayesian Networks, establishing foundations for the planned 2028 implementation while maintaining required throughput rates and safety standards.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163256</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonized Cement Manufacturing via Advanced Production Technologies</title>
<link>https://hdl.handle.net/1721.1/163255</link>
<description>Decarbonized Cement Manufacturing via Advanced Production Technologies
Norwalk, Michael
Cement production is the second-largest source of industrial carbon dioxide emissions worldwide. Due to the chemical reactions inherent in its production and the temperatures required to drive those reactions, cement is considered a “hard-to-decarbonize” industry. In this study, three emerging technologies to reduce the carbon intensity of industrial processes, namely direct high-temperature electric process heat, electric process heat utilizing thermal storage, and liquid amine-based carbon capture, are assessed in the context of a greenfield cement production facility relative to a new-build conventional cement plant fueled with natural gas. Cement plants utilizing this set of technologies were modeled in five U.S. geographies to determine the relative economic returns. The economics were assessed, inclusive of available economic incentives, both for the scenario in which the cement produced is sold in the U.S. market and for the scenario in which the cement produced is exported to the European Union (E.U.) market to assess potential benefits from the E.U. carbon pricing system. The analysis indicates that at current technology prices, the economic returns of the assessed technologies, while in some cases profitable, continue to lag those of conventional production technology in the domestic U.S. market. As these technologies are deployed and their costs come down, the economics of carbon capture solutions have the potential to be competitive with conventional technology. The E.U. carbon emissions penalties are effective in altering the economics in such a way that implementing carbon capture systems becomes the most attractive economic option, demonstrating the power of carbon emissions markets. With increased technology deployment as well as the adoption of targeted incentives in the U.S. market, the adoption of low-carbon cement production technologies can be accelerated.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163255</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Obscured universality in Mandarin</title>
<link>https://hdl.handle.net/1721.1/163254</link>
<description>Obscured universality in Mandarin
Chen, Fulang
In this dissertation, I investigate the apparently distinctive syntactic properties associated with the BEI-construction, the BA-construction, and resultative constructions in Mandarin Chinese, which I argue obscure properties that are universal across natural languages. In the case of the Mandarin BEI-construction, it exhibits both passive-like and tough-movement-like properties. I argue for a novel analysis of the BEI-construction as a passive construction, where the passive head/BEI hosts a composite probe [&#120601; + Ā], which triggers composite A/Ā-movement, in the sense of Van Urk (2015). The subject in the BEI-construction is derived via (successive-cyclic) composite A/Ā-movement, followed by a terminating step of A-movement, similar to Longenbaugh’s (2017) analysis of English tough-movement. Under the proposed analysis, the mixed A/Ā-properties associated with the BEI-construction are direct consequences of composite A/Ā-movement (following Van Urk 2015; Longenbaugh 2017). In the case of the Mandarin BA-construction, it involves an apparently pre-posed noun phrase (the post-BA NP) with an affectedness interpretation, which can be identified with either the subject of a resultative phrase in a complex predicate or the direct object of a simple transitive verb. I argue for a novel analysis of the Mandarin BA-construction as a causative construction, where the causative head, which selects a predicate of the caused/resulting event and projects a predicate of the causing event (following Pylkkänen 2002, 2008), has two additional arguments: a causer and a causee. The post-BA NP, as the causee argument of the causative head, also controls a PRO subject in a resultative phrase (following Huang 1992), which is overt in a complex-predicate BA-construction and is phonologically null in a simple-transitive BA-construction (following Sybesma 1992, 1999).
The post-BA NP is interpreted as being affected in the causing event, in the sense that it is caused to perform an action or undergo a change of state (following Alsina 1992). Lastly, in Mandarin, there is no apparent unaccusative-unergative distinction in resultative constructions, unlike languages like English, where distinctions between resultative constructions with unaccusative and unergative matrix verbs follow from the Unaccusativity Hypothesis (Perlmutter 1978; Burzio 1986) and general principles such as the Direct Object Restriction (Simpson 1983; Levin &amp; Rappaport Hovav 1995) and Burzio’s generalization (Burzio 1986). I argue that resultative constructions in Mandarin are causative constructions, where the causative head has four possible argument structures, depending on whether the matrix verb is unaccusative, unergative, or transitive, as well as the semantic relation between the matrix subject and the matrix verb (and between the post-verbal NP and the matrix verb). Despite the fact that the argument structure of the causative head obscures the argument structure of the matrix verb, I argue that in Mandarin resultative constructions, the sole argument of an unaccusative matrix verb is always a causee argument, whether or not an additional causer external argument is present, while the sole argument of an unergative matrix verb, which is a causer external argument otherwise, is a causer argument when the causer is an internal argument. The dissertation showcases how Mandarin provides insight in defending and expanding our knowledge of cross-linguistic properties such as passivization (which embodies Burzio’s generalization and feature-driven movement), composite probing, the bi-clausal syntax and bi-eventive semantics of causative constructions, as well as the nature of affectedness (in causative constructions) and implications for the Unaccusativity Hypothesis and the Uniformity of Theta-Assignment Hypothesis (Baker 1988).
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163254</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-Polarity Ion Electrospray Propulsion</title>
<link>https://hdl.handle.net/1721.1/163253</link>
<description>Single-Polarity Ion Electrospray Propulsion
Shaik, Saba Zareen
Electrospray thrusters are highly efficient spacecraft propulsion devices that accelerate ions sourced from ionic liquid propellants to produce thrust. Typically, electrosprays are fired in a dual-polarity configuration in which the polarity of the ion beam is periodically reversed. This strategy is difficult to implement and imposes limitations on system size and performance. We instead propose a single-polarity design in which negative ions are emitted continuously from the thruster, enabling extreme miniaturization, faster startup, better emission stability, and simpler power processing. This thesis investigates two challenges associated with the single-polarity design. First, system lifetime is of principal importance for electrospray propulsion systems in general and must be verified for a single-polarity implementation. Long-duration electrospray tests are performed, demonstrating that single-polarity thrusters achieve lifetimes and performance comparable to state-of-the-art systems, with high mass utilization and minimal hardware degradation. An additional challenge is propellant electrochemistry, triggered when positive counterions accumulate in the ionic liquid. A suite of experiments is conducted to identify and characterize electrochemical processes, including electrical double-layer potential evolution and gas-phase product formation, in electrospray thrusters over long firing durations.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163253</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparative Analysis of Semiconductor Investment Environments in the U.S. and China</title>
<link>https://hdl.handle.net/1721.1/163252</link>
<description>Comparative Analysis of Semiconductor Investment Environments in the U.S. and China
Zhang, Hanxue
Semiconductors are fundamental to Artificial Intelligence (AI) and central to global technological competition. Against this backdrop, this thesis compares semiconductor primary investment environments in the United States and China, examining their implications for industry development and innovation. The study employs a mixed-methods approach, combining expert interviews, data analysis, and natural language processing (NLP). It draws on data covering primary market investments, M&amp;A deals, and government grants to examine capital structures, investment stages, sectoral focus, and exit efficiency. Furthermore, it analyzes nearly 3,000 semiconductor industry reports (2020-2025) to identify evolving strategic priorities and thematic trends shaping these environments. Findings reveal that China’s state-led, vertically integrated model prioritizes upstream capacity building and supply chain autonomy, supported by government guidance funds, private capital, and policy-driven mechanisms. However, there remains a significant gap in leading-edge chips, necessitating precise investments and patient capital to bridge this divide. The U.S. ecosystem, by contrast, shaped by major technology firms and federal support, focuses on design innovation and cutting-edge technologies. However, structural constraints such as limited exit pathways, fragmented fabrication capacity, and insufficient industrial policies may hinder the U.S. in nurturing innovation-driven small and medium-sized enterprises (SMEs) in the semiconductor industry. This thesis highlights the structural divergence between the U.S. and China’s semiconductor ecosystems by examining policy, primary market capital, and investment dynamics. It offers policymakers and investors a strategic overview of how these forces shape innovation and resilience, while identifying emerging investment priorities and future development paths.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163252</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forecasting Automotive Production Volume Using Regression and Time Series Modelling</title>
<link>https://hdl.handle.net/1721.1/163251</link>
<description>Forecasting Automotive Production Volume Using Regression and Time Series Modelling
Gong, Yutao
Accurate forecasting of automotive production volumes is a critical capability for suppliers navigating an increasingly volatile industry. Overly optimistic forecasts, particularly from Original Equipment Manufacturers (OEMs), lead to misallocated capacity and lost opportunities across the supply chain. This thesis investigates whether advanced statistical models can improve upon benchmark industry forecasts and provide automotive suppliers with more reliable, practical tools for demand planning. Several forecasting methodologies are evaluated, including ARIMA, standard linear regression, Lasso regression, Theta model, and a hybrid Boosted Theta model. Models are tested across North America, Europe, and Greater China using 2000-2024 vehicle production and macroeconomic data. Results show that the Theta model outperforms industry forecasts across both 1-year and 5-year horizons in North America and Europe. Its simplicity, low data requirements, and robustness to market volatility make it suitable for industrial use. The model was successfully implemented at Commonwealth Rolled Products, an aluminum rolling mill in Kentucky and a portfolio company of American Industrial Partners (AIP), where it was adopted for 2025 planning and drove a shift towards data-centric forecasting practices. This research presents a real-world example of applying academic techniques to solve actual business problems, serving as a valuable reference for suppliers seeking to improve forecast accuracy and operational planning in the evolving automotive landscape.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163251</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of university venture funds in supporting early-stage Japanese startups</title>
<link>https://hdl.handle.net/1721.1/163250</link>
<description>The role of university venture funds in supporting early-stage Japanese startups
Brillaud, Nami
This thesis explores how university venture funds in Japan are uniquely positioned to turn the country’s innovation capacity into entrepreneurial capacity by supporting early-stage startups. While Japan consistently ranks high in research output, much of this potential is not being translated into successful entrepreneurship. Risk capital is scarce compared to other ecosystems, particularly for deep tech, and support systems for early-stage startups are still limited. University venture funds – which inherently connect universities, entrepreneurs, and risk capital – are well positioned to bridge this gap. Yet despite their growing relevance, their evolving role in supporting Japanese early-stage startups is understudied.&#13;
&#13;
This study compares university venture funds with different profiles – ranging from leading and longstanding funds like UTEC, to public-private venture funds established through government initiatives, to recent funds with diversified structures – analyzing how they are structured, how they invest, and what results they have seen so far. It then builds on startup examples and interviews with university venture funds to identify how these funds can better support early-stage startups through improved fund operations, stronger pre-seed support, as well as a strategic approach to growth and exits. Ultimately, this thesis advocates for actionable solutions informed by global practices but adapted to Japan’s unique startup ecosystem.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163250</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing Procurement Data for Cost Saving Application</title>
<link>https://hdl.handle.net/1721.1/163249</link>
<description>Analyzing Procurement Data for Cost Saving Application
Pan, Haoting
In an increasingly data-driven business environment, procurement analytics plays a critical role in optimizing costs and improving supply chain efficiency. This thesis examines the development and implementation of the Lifecycle Cost Management (LCM) tool at Caterpillar Inc., a global leader in heavy equipment manufacturing. Given Caterpillar's decentralized procurement structure, managing cost-saving initiatives across its 150 facilities (Caterpillar | Caterpillar Frequently Asked Questions (FAQs), n.d.) and 28,000 suppliers (Caterpillar | Caterpillar at a Glance, n.d.) poses a significant challenge. The LCM tool leverages machine learning models to identify overpriced purchase orders (POs) and generate actionable cost-saving opportunities.&#13;
This study explores the methodology used to enhance LCM's predictive capabilities, including data sourcing and cleaning, feature engineering, model selection, and validation. Various regression models, clustering techniques, and machine learning algorithms, such as Random Forest and XGBoost, are tested to identify cost outliers. A validation process is implemented to ensure that flagged outliers are cost-saving opportunities appropriate for execution.&#13;
Beyond technical development, the thesis addresses the processes of digital tool adoption within Caterpillar’s procurement teams. A change management approach is employed, incorporating buyer interviews, stakeholder engagement, and iterative user experience (UX) improvements. Through case studies, the study highlights the machine learning model performance and tangible financial impact of LCM. &#13;
The LCM tool has identified more than $100M in potential data-driven savings, with the goal of realizing 20% of them. Because Caterpillar’s procurement contracts are often long-term, these savings can be considered perpetual. &#13;
Findings indicate that while machine learning models effectively identify cost outliers, their success is contingent on robust data governance, stakeholder buy-in, and integration into procurement workflows. The study underscores the importance of data management, organizational alignment, and continuous refinement of digital procurement tools. Recommended future work includes enhancing data infrastructure, integrating AI-driven contract management and analysis, and refining cost estimation methodologies. The insights gained contribute to the broader application of procurement analytics and digital transformation in manufacturing enterprises.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163249</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact Evaluation and Prioritization Framework for Manufacturing Inspection Technology Investment</title>
<link>https://hdl.handle.net/1721.1/163248</link>
<description>Impact Evaluation and Prioritization Framework for Manufacturing Inspection Technology Investment
DiDio, Isabella
Advancements in visual inspection technologies and machine learning algorithms present Johnson &amp; Johnson Vision with an opportunity to enhance quality control for Acuvue contact lenses, addressing inefficiencies such as unnecessary scrap, customer complaints, and lead time variability. With over 5 billion lenses produced annually across 100 manufacturing lines, the proposed implementation of advanced camera optics and machine learning for inspection aims to improve defect detection accuracy, minimize manual inspection, and reduce customer complaints.&#13;
An impact evaluation and prioritization framework was developed to strategically implement these upgrades across 100 manufacturing lines, integrating historical data analysis, financial modeling, and engineering risk assessments. Key findings highlight that complaint reduction, scrap savings, and labor cost reductions are the primary drivers of cost savings, with inventory savings offering incremental benefits over time.&#13;
In conclusion, this research demonstrates the process of integrating advanced technologies into manufacturing processes. By aligning engineering solutions with strategic business objectives, the findings provide actionable insights for managing large-scale technological upgrades across global networks.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163248</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>AI Through the Viewfinder: Reimagining the Camera as a Tool for AI Image Generation</title>
<link>https://hdl.handle.net/1721.1/163247</link>
<description>AI Through the Viewfinder: Reimagining the Camera as a Tool for AI Image Generation
Shodipo, Bukunmi
The rapid emergence of artificial intelligence (AI) is causing profound shifts within the art world, reigniting age-old debates on the boundaries of what can be considered art. For example, many AI systems are employed to mimic the styles of existing artists and their works. Although this approach is deemed derivative and uninspiring by many in the art world, it is also forcing us to reconsider longstanding beliefs attached to creativity such as the importance of originality and authorship. Given that AI is here to stay, this thesis explores a critical question around AI and perception, asking “How and what does AI see?” Specifically, this thesis investigates the types of biases that are ingrained or embedded in AI systems, and how these biases are reflected in the output, specifically in the context of images. As part of this investigation, this thesis culminates in a prototype: an AI camera that embodies the process of AI ‘seeing the world’. This camera integrates photography with artificial intelligence, serving not only as a tool for technical exploration but also as a metaphor for examining how AI technologies offer diverse and potentially transformative perspectives on reality, much like a traditional camera. By abstracting AI technology into a camera, this project aims to start a conversation about how AI, like a camera, offers us different, sometimes biased views of the world. In doing so, the camera is redefined from a mere tool for capturing images to one that generates them, and in some cases (mis)represents human forms and identities.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163247</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Technical Discourse of Miyake Design Studio: Episode in the Interpretation of Cloth, 1995-2007</title>
<link>https://hdl.handle.net/1721.1/163246</link>
<description>The Technical Discourse of Miyake Design Studio: Episode in the Interpretation of Cloth, 1995-2007
Tan, Yi-Ern Samuel
In the late 1980s, Miyake Design Studio began to register patents concerning the Studio’s development of novel techniques to process pleated clothing. Their first patent, filed in 1989, was registered in designer Issey Miyake’s name, detailing the use of an industrial machine to pleat an entire garment after sewing, reversing the order of the conventional approach to creating pleated garments. In the years that followed, this entry into what I term “technical discourse” would proliferate with the Studio’s establishment of the PLEATS PLEASE brand specializing in pleated garments, and the A-POC (“a piece of cloth”) project with designer and textile engineer Fujiwara Dai. Each of these projects produced numerous patents, including a period between 1997 and 2008 I call the “Miyake Patent Explosion” when the Studio filed twenty patents with the Japan Patent Office and its international counterparts.&#13;
&#13;
In contrast to aesthetic discourses proposing the value of a work on its artistic merits and intellectual content, technical discourse points to the profusion of texts produced and circulated by the Studio—in this thesis, patents and legal claims—to uphold the utility of their products and their protection as intellectual property. By engaging with technical discourse, Miyake Design Studio were not only creating legal safeguards around the ideas it considered proprietary. Rather, their extensive production of technical discourse positioned Miyake as a figure who exceeded the boundaries of fashion, approaching its adjacent categories of unhyphenated design, architecture, and art within whose circles his objects circulate as currency.&#13;
&#13;
Exploring these texts as they are deployed in the defense of intellectual property, I argue that technical discourse can be treated as a form of historical archive that allows us to historicize claims to technological inheritance that bear upon the discussion of Miyake’s work. Specifically, I look to patents as a citational practice, or as Alain Pottage and Brad Sherman write, a “chain of reference” through which patent lawyers and engineers make deliberate connections between one technology and another to acknowledge, distinguish, and legitimize. Examining three episodes where technical discourse opens the way for historical narrative—a lawsuit over imitation goods, a case of mistaken identity in design criticism, and a moment of technological dissolution—I argue that we cannot divorce Miyake and his work from the technical complex that surrounds the Studio’s production of objects. Turning to these technical discourses that exist in the public record, I suspend the promise of monographic history that peers into the mind of the individual and probe instead the possibilities of seeing agencies beyond those attributed to the authorial figure of Miyake—his corporate apparatus, his allies, his admirers, his critics, his opponents, the receptive public.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163246</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Joint Inference of the Lexicon and Phonology Affects the Learnability of Process Interactions</title>
<link>https://hdl.handle.net/1721.1/163245</link>
<description>How Joint Inference of the Lexicon and Phonology Affects the Learnability of Process Interactions
Yang, Christopher
Contemporary phonological research has increasingly become interested in exploring the topic of learnability through the use of computational models. However, many of the proposed models lack one or more of the following properties. (1) Many models do not consider the effect of the lexicon on performance at all, and those that do fail to consider the effect contextual allomorphy has on performance. (2) Many models characterize learnability in terms of the algorithmic implementation of search, rather than a more principled relationship between the data and the hypothesis space. These properties are critically relevant when it comes to the learnability of process interactions. The experimental literature has demonstrated that artificial languages exhibiting patterns generated from certain process interactions are more likely to be successfully reproduced and generalized by participants than others (Ettlinger 2008; Kim 2012; Brooks, Pajak, &amp; Baković 2013; Prickett 2019). The historical literature has likewise noted that surface patterns generated from particular process interactions are more likely to change in systematic ways than others, including lexicalization, in which an alternation is encoded into the lexicon instead of the phonology, and reanalysis, in which a surface generalization is lost or changed entirely (Kiparsky 1968, 1971). Each of these hypotheses makes different predictions when generating forms not seen during training. In this dissertation, I make the following contributions. (1) I propose a novel noisy-channel model of morphophonological learning. This model jointly infers a weighted space of consistent and nearly consistent lexicons and grammars from labelled, unparsed surface data. Predictions are generated given the entirety of the inferred weighted space.
(2) I compare the predictions of the model to the results of two artificial language learning experiments, which, despite involving the same underlying processes, produced contradictory results. I show that the model is able to achieve the results of both experiments under a unified account: the generalizability of a pattern is determined by the number of hypotheses compatible or nearly compatible with that pattern.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163245</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of consolidation and plastic resistance on clays</title>
<link>https://hdl.handle.net/1721.1/163105</link>
<description>Investigation of consolidation and plastic resistance on clays
Marsal, Raúl J.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1944; Vita. Appendix contains numerous pamphlets.
</description>
<pubDate>Sat, 01 Jan 1944 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163105</guid>
<dc:date>1944-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An anthropological study based upon observations of complexion and cephalic measurements of students at the Massachusetts Institute of Technology</title>
<link>https://hdl.handle.net/1721.1/163104</link>
<description>An anthropological study based upon observations of complexion and cephalic measurements of students at the Massachusetts Institute of Technology
Fisk, Harry George.; Melluish, James George.
Thesis: B.S., Massachusetts Institute of Technology, Department of General Studies, 1896
</description>
<pubDate>Wed, 01 Jan 1896 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163104</guid>
<dc:date>1896-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The design and construction of a new density photometer</title>
<link>https://hdl.handle.net/1721.1/163103</link>
<description>The design and construction of a new density photometer
Brown, Sherwood Fiske.; Perkins, Oliver L.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrochemical Engineering, 1923; Includes bibliographical references (leaves 17-18).
</description>
<pubDate>Mon, 01 Jan 1923 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163103</guid>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An optical instrument for the synthesis of sound</title>
<link>https://hdl.handle.net/1721.1/163102</link>
<description>An optical instrument for the synthesis of sound
Brown, Sherwood Fiske.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1930
</description>
<pubDate>Wed, 01 Jan 1930 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163102</guid>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The chemical and physical constitution of reduced copper-red glazes</title>
<link>https://hdl.handle.net/1721.1/163101</link>
<description>The chemical and physical constitution of reduced copper-red glazes
Brown, Sherwood Fiske.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1961; Includes bibliographical references (leaf 60).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163101</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Pleasant Valley, Nova Scotia, Limestone</title>
<link>https://hdl.handle.net/1721.1/163100</link>
<description>The Pleasant Valley, Nova Scotia, Limestone
Jeffries, James T.; Manlove, Robert F.
Thesis: B.S., Massachusetts Institute of Technology, Department of Geology, 1959; Includes bibliographical references (leaf 63).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163100</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A spiritual and cultural synagogue center for modern American Jewry : an architectural and sociological study</title>
<link>https://hdl.handle.net/1721.1/163099</link>
<description>A spiritual and cultural synagogue center for modern American Jewry : an architectural and sociological study
Goody, Marvin E.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1951; "A thesis submitted in partial fulfillment of the requirements for the degree of Master in Architecture, Massachusetts Institute of Technology, August 22, 1951."; Includes bibliographical references (leaves 93-95).
</description>
<pubDate>Mon, 01 Jan 1951 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163099</guid>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on structuralism and development</title>
<link>https://hdl.handle.net/1721.1/163098</link>
<description>Essays on structuralism and development
Boutros-Ghali, Y.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1981; Includes bibliographies.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163098</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and evaluation of a frequency-shifting hearing aid.</title>
<link>https://hdl.handle.net/1721.1/163097</link>
<description>Design and evaluation of a frequency-shifting hearing aid.
Falkenburg, Douglas Emil.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Bibliography: leaves 103-104.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163097</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neutron scattering study of the magnetism and structural phases of superconducting La₂CuO₄₊y̳</title>
<link>https://hdl.handle.net/1721.1/163096</link>
<description>Neutron scattering study of the magnetism and structural phases of superconducting La₂CuO₄₊y̳
Lee, Young Sang, 1971-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2000; In title on t.p., double-underscored "y" appears as subscript.; Includes bibliographical references (p. 195-215).
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163096</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Operational Value Stream Analysis for Developmental Excellence</title>
<link>https://hdl.handle.net/1721.1/163055</link>
<description>An Operational Value Stream Analysis for Developmental Excellence
Shaw, Eric T.
The aerospace and defense industry faces increasing challenges in new product development, where financial constraints and risk aversion hinder innovation. Using a multidisciplinary approach that integrates contract theory, computational fluid dynamics (CFD), and machine learning, this research explores the impacts of engineering requirements, financial alignment among stakeholders, and improved efficiencies in predictive modeling techniques for two separate air vehicle programs: A and B. A Monte Carlo analysis using SEER-H estimation software quantifies the financial and schedule impacts of engineering requirements, revealing a 10–30% cost increase due to volatility in air vehicle development design parameters. Moreover, a game-theoretic contract negotiation simulation illustrates the importance and opportunity of financial incentive alignment among key stakeholders. Additionally, predictive analytics leveraging machine learning models better capture the relevant flow mechanics, improving the circumferential distortion estimations in nacelle aerodynamics by over 10% compared to traditional heuristics. Finally, a CFD-based actuator disk source modeling approach demonstrates a 60% reduction in steady-state distortion at some portions of the flight envelope, due to the impact of the fan upstream influence on inlet flow distortion, suggesting increased operational capability for air vehicle program B. This research provides actionable recommendations to enhance the operational value stream of new air vehicle program development, emphasizing the need for pre-RFP requirements validation, advanced machine learning applications for predictive engineering, and refined CFD modeling to identify technical risks earlier in the design process.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163055</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computational Framework for Simulating Entanglement-Based Drone Countermeasures with Flexible Filaments Immersed in Viscous Flow</title>
<link>https://hdl.handle.net/1721.1/163054</link>
<description>A Computational Framework for Simulating Entanglement-Based Drone Countermeasures with Flexible Filaments Immersed in Viscous Flow
Sonandres, Jake T.
In this work, we present a computational framework for modeling the coupled dynamic interactions of highly flexible slender filaments immersed in a viscous flow and their entanglement with themselves and moving structures. This work is motivated by a novel drone countermeasure that entangles propellers with flexible filament clouds, inducing a loss of thrust and control authority in the drone. However, the framework is relevant to a wider range of applications, including actin filaments in cell biology, carbon nanotubes in composite materials, and rope-like structures in industrial settings. Each filament is modeled with the three-dimensional geometrically exact Kirchhoff-Love torsion-free finite element beam formulation. The fluid flow resulting from filament aerodynamic interaction is described through a Boundary Integral (BI) formulation of the incompressible Stokes equations based on the Stokeslet discretization. The heavy computational load of the resulting dense system is addressed through the use of fast GPU-based dense linear solvers. The BI formulation is coupled to the filament solid mechanics by enforcing momentum balance at the dynamically evolving filament-fluid interface. Additionally, the solid contact interactions between filaments are modeled with a point-to-point frictional contact algorithm that applies discrete contact and frictional forces at the closest point between the beam elements. We address the difficulties associated with contact between elements represented with third-order Hermitian polynomial shape functions and describe the strategies adopted to overcome these challenges. To capture propeller fouling for drone countermeasures, we incorporate a propeller and motor model whose thrust and torque responses are affected by contact interactions during entanglement. We verify our framework against simple analytical solutions and demonstrate its capabilities with numerical examples that attempt to capture large-scale filament entanglement behavior.
In particular, we apply our methodology to demonstrate the process by which filament entanglement can restrict motion and reduce the efficacy of propellers. The results show that the framework can be used to understand the connection between filament entanglement, key system properties, and the resulting thrust generated by the propeller.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163054</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Global sustainable aviation fuel production potential from current agricultural production: a holistic data analytics and systems analysis approach</title>
<link>https://hdl.handle.net/1721.1/163053</link>
<description>Global sustainable aviation fuel production potential from current agricultural production: a holistic data analytics and systems analysis approach
Martin, Estelle Claude Aline
Aviation contributes significantly to global greenhouse gas emissions, driven primarily by its dependency on fossil-based jet fuel. Sustainable Aviation Fuel (SAF) offers a short-term option to mitigate these emissions. However, its current scalability remains limited, constrained by access to sustainable biomass. Realizing SAF’s potential in the near term, using the agricultural and industrial systems already in place, requires a detailed understanding of biomass availability, resource competition, and the scalability of SAF production. This thesis presents a comprehensive system analysis framework and a data-driven methodology for evaluating SAF production potential based on current agricultural output, without assuming land expansion or major yield improvements, and while preserving food utilization. It evaluates the SAF production potential from increasing biomass availability by redirecting biomass currently used for some non-food purposes, and by utilizing processing and agricultural residues. In-depth analysis of four high-potential case studies, one for each main biomass family (starchy, sugary, oily, and fats and greases), was used to construct a detailed model of the supply chain. This structure was then applied globally across all countries and relevant feedstocks to estimate SAF production potential and associated system requirements.&#13;
&#13;
Findings from the case studies show that these four high-potential opportunities could collectively meet only up to 13.1% of global jet fuel demand in 2023, assuming 100% neat SAF. The global analysis estimates that the SAF production potential from the considered streams of increased biomass availability could meet up to about two-thirds of global jet fuel demand, with 28.7% derived from agricultural residues, 25.9% from redirected main products, and 12.5% from processing residues. These contributions hence remain insufficient to fully displace fossil jet fuel. This work provides an estimate of what could be achieved using the existing agricultural and industrial systems, what resources would be required, and how it compares to global resource availability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163053</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal inference for complex systems and applications to turbulent flows</title>
<link>https://hdl.handle.net/1721.1/163052</link>
<description>Causal inference for complex systems and applications to turbulent flows
Sánchez, Álvaro Martínez
Causality lies at the heart of scientific inquiry, serving as the fundamental basis for understanding interactions among variables in physical systems. Despite its central role, current methods for causal inference face significant challenges due to nonlinear dependencies, stochastic interactions, self-causation, collider effects, and influences from exogenous factors, among others. While existing methods can effectively address some of these challenges, no single approach has successfully integrated all these aspects. Here, we address these challenges with SURD: Synergistic-Unique-Redundant Decomposition of causality (Nat. Commun., vol. 15, 2024, p. 9296). SURD quantifies causality as the increments of redundant, unique, and synergistic information gained about future events from past observations. The formulation is non-intrusive and applicable to both computational and experimental investigations, even when samples are scarce. We benchmark SURD in scenarios that pose significant challenges for causal inference and demonstrate that it offers a more reliable quantification of causality compared to previous methods. We further illustrate the applicability of our approach in two turbulent-flow scenarios: the energy transfer across scales in the turbulent energy cascade and the interaction between motions across scales in a turbulent boundary layer. Our results show that, without accounting for redundant and synergistic effects, traditional approaches to causal inference may lead to incomplete or misleading conclusions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163052</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems Theoretic Process Analysis of Sociotechnical Systems</title>
<link>https://hdl.handle.net/1721.1/163051</link>
<description>Systems Theoretic Process Analysis of Sociotechnical Systems
Harrington, Polly
The safety and success of complex modern systems, such as hospitals, aircraft, or software, depend on their ability to integrate people and technical components. For example, doctors must be able to use their computerized surgical tools to treat their patients successfully, airplane pilots must be able to operate the required controls for takeoff and landing, and regulators must be able to interpret the data they receive to make critical decisions. However, designing systems that facilitate safe interactions between humans and technology is not a simple task. System designers must consider not only the constraints of the technical components but also human requirements throughout the entire system. Yet accidents in modern systems continue to show that more work is needed to identify and prevent unsafe interactions between humans and technology. Systems Theoretic Process Analysis (STPA) is a hazard analysis methodology based on systems theory that has been used to improve system safety in various industries, including healthcare, aviation, nuclear power, and automotive design. However, if hazard analysts using STPA lack significant expertise in human factors engineering (HFE), they may be unable to thoroughly and rigorously identify critical unsafe interactions. This thesis presents a process for utilizing HFE to improve the results of STPA analyses on sociotechnical systems. In particular, the process focuses on the thorough identification of causal scenarios in sociotechnical systems by incorporating relevant human factors concepts. The process allows analysts without significant training in HFE to improve their ability to identify useful scenarios for humans in their system. The effectiveness of the improved process is demonstrated using a healthcare case study on over-the-counter clinical laboratory tests in the United States.
By establishing a process for non-HFE experts to use when conducting STPA analyses, more systems can be developed that enhance human performance rather than increase conflict between humans and the engineered system.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163051</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Algorithms for Quantitative Analysis of&#13;
Long Electrical Arcs in Crossflows</title>
<link>https://hdl.handle.net/1721.1/163050</link>
<description>Development of Algorithms for Quantitative Analysis of&#13;
Long Electrical Arcs in Crossflows
Lin, Fayleon
A single lightning strike can deliver a steady current of hundreds of amps during its attachment to an aircraft. Therefore, it is imperative to have an adequate lightning protection system in the aircraft to minimize the probability of catastrophic accidents. Current guidelines for lightning protection systems are based on prior service experience and historical data, which might become insufficient for future-generation aircraft; these often adopt novel, unconventional designs that deviate significantly from current ones. Therefore, efforts are underway to update these guidelines with novel methods, such as designs aided by numerical simulation that can accurately model the behavior of lightning attachment and the subsequent swept-stroke phase. To aid in the development of these numerical methods, ample data on not only the electrical arcs but also their interactions with the surrounding flow are necessary for validation. However, most studies on long electrical arcs lack a detailed investigation of the coupling between the electrical arcs and the surrounding flow field. For that purpose, teams from the Massachusetts Institute of Technology (MIT), ONERA, and Universitat Politècnica de Catalunya (UPC) conducted an extensive experimental campaign in April 2024 that investigates this coupling in detail for the first time. Data gathered from this experiment include electrical properties of the arc, high-speed video of the arc column, and the velocity field of the surrounding flow. Approximately 200 cases were run with various geometrical and electrical configurations. To meaningfully analyze all the data, a set of algorithms was developed to automatically process, analyze, and visualize these data.
Detailed analysis of the root and column behavior was performed; electrical properties were verified to be consistent with literature values; and coupling between the velocities of the arc column and the flow field was determined by simultaneous visualization of both data forms.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163050</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance and Analysis of a Deployable Diffractive Optical Element for Small Satellite Missions</title>
<link>https://hdl.handle.net/1721.1/163049</link>
<description>Performance and Analysis of a Deployable Diffractive Optical Element for Small Satellite Missions
Bahlous-Boldi, Adam A.
As space missions push toward smaller, lighter, and more deployable instrumentation, diffractive optical elements (DOEs) offer a compelling alternative to traditional optics. Their ability to focus light through engineered phase profiles rather than curved surfaces allows for large-aperture, flat optics that are far lighter and easier to package for launch. However, this benefit comes with trade-offs: DOEs are sensitive to wavelength mismatch, manufacturing errors, and environmental deformations, especially thermal gradients and membrane tensioning in space. This thesis develops a comprehensive framework for understanding and simulating the performance of DOEs under realistic operating conditions. Beginning from first principles, the work contrasts geometric and wave-optical models for Fresnel zone plates and multilevel diffractive lenses, leading to quantitative predictions of diffraction efficiency and PSF quality under non-idealities. A key contribution is the analytical and numerical analysis of how uniform thickness errors, wavelength mismatches, and thermal expansions degrade optical performance, both in efficiency and wavefront fidelity. To evaluate these effects in detail, a flexible simulation tool was developed in MATLAB, enabling both Fourier and integral-based propagation through arbitrarily deformed DOEs. These models are applied to a conceptual space-based LIDAR system, SPECIES, that uses a deployable DOE optic to demonstrate the feasibility and limitations of this approach. The results show that DOEs can tolerate some global deformations (for example, a 1 mm deformation results in a 38% performance loss in an F3 LiDAR system with a 1 mm detector diameter). However, they remain highly sensitive to fine-scale shape errors, posing significant challenges for high-precision applications like fiber coupling or imaging.
The findings provide new insight into the tolerances, benefits, and trade-offs of DOE-based systems in space, and lay the groundwork for future missions seeking to leverage lightweight diffractive optics for remote sensing and optical communication.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163049</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Partial Gravity Load Simulation Using Mechanical Off-Loading and Lower Body Negative Pressure</title>
<link>https://hdl.handle.net/1721.1/163048</link>
<description>Partial Gravity Load Simulation Using Mechanical Off-Loading and Lower Body Negative Pressure
Davalos, Daniela L.
Prolonged exposure to reduced gravity environments can lead to significant deconditioning of the cardiovascular, musculoskeletal, and ocular systems. These effects increase the risk of orthostatic intolerance, bone loss, and conditions such as Spaceflight Associated Neuro-ocular Syndrome (SANS). As spaceflight missions grow longer and more frequent, especially with increased extravehicular activity (EVA) on the Moon or Mars, it is critical to develop effective countermeasures and Earth-based analogs to simulate these gravitational environments and evaluate physiological impacts. This thesis addresses these challenges through two complementary approaches. First, it presents the design and development of the MIT Moonwalker IV, a passive mechanical offloading system that simulates partial gravity by applying vertical support via a spring-cable mechanism. In a treadmill-based pilot study, one participant showed at least a 50% reduction in metabolic demand while running under simulated Martian gravity. These findings validate the Moonwalker IV as a metabolic analog for EVA task simulation. Second, this thesis evaluates a collapsible lower body negative pressure (LBNP) suit as a wearable countermeasure for micro and partial gravity environments. By applying negative pressure to the lower body, the suit helps restore the mechanical loading and hydrostatic fluid gradients typically provided by Earth’s gravity. The suit was tested in both simulated reduced gravity via a head-down/head-up tilt paradigm and true reduced gravity via parabolic flight. Each condition was evaluated both with and without –20 mmHg of LBNP. Results demonstrated that the collapsible LBNP suit produced cardiovascular responses comparable to those observed in traditional rigid LBNP chambers. It also induced lower body fluid shifts as measured by segmental leg bioimpedance, reduced intraocular pressure, and generated ground reaction forces similar to standing in 1G.
These findings support the complementary use of Earth-based analog systems to simulate partial gravity and wearable devices to simulate Earth gravity in reduced gravity environments. They offer valuable tools for preparing astronauts and preserving physiological health during long-duration space missions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163048</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unforgettable Generalization in Language Models</title>
<link>https://hdl.handle.net/1721.1/163047</link>
<description>Unforgettable Generalization in Language Models
Zhang, Eric
When language models (LMs) are trained to forget (or “unlearn”) a skill, how precisely does their behavior change? We study the behavior of transformer LMs in which tasks have been forgotten via fine-tuning on randomized labels. Such LMs learn to generate near-random predictions for individual examples in the “training” set used for forgetting. Across tasks, however, LMs exhibit extreme variability in whether LM predictions change on examples outside the training set. In some tasks (like entailment classification), forgetting generalizes robustly, and causes models to produce uninformative predictions on new task instances; in other tasks (like physical commonsense reasoning and scientific question answering) forgetting affects only the training examples, and models continue to perform the “forgotten” task accurately even for examples very similar to those that appeared in the training set. Dataset difficulty is not predictive of whether a behavior can be forgotten; instead, generalization in forgetting is (weakly) predicted by the confidence of LMs’ initial task predictions and the variability of LM representations of training data, with low confidence and low variability both associated with greater generalization. Perhaps most surprisingly, random-label forgetting appears to be somewhat insensitive to the contents of the training set: for example, models trained on science questions with random labels continue to answer other science questions accurately, but begin to produce random labels on entailment classification tasks. Finally, we show that even generalizable forgetting is shallow: linear probes trained on LMs’ representations can still perform tasks reliably after forgetting. Our results highlight the difficulty and unpredictability of performing targeted skill removal from models via fine-tuning.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163047</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guessing Random Additive Noise Decoding in Coded&#13;
Multiple-Input Multiple-Output Systems</title>
<link>https://hdl.handle.net/1721.1/163045</link>
<description>Guessing Random Additive Noise Decoding in Coded&#13;
Multiple-Input Multiple-Output Systems
Wu, Benjamin
Multiple-Input Multiple-Output (MIMO) wireless communication systems incorporate forward error correction (FEC) to achieve high reliability under fading and interference. In this thesis, we explore the emerging FEC paradigm of Guessing Random Additive Noise Decoding (GRAND) in a point-to-point MIMO system. &#13;
Treating GRAND as an FEC decoder disjoint from the MIMO detector, we compare the soft-decision Ordered Reliability Bits GRAND (ORBGRAND) to CRC-Assisted Successive Cancellation List (CA-SCL) decoding of the CRC-Assisted Polar (CA-Polar) [105, 128] code found in the 5G New Radio standard. For this code, we find that ORBGRAND outperforms CA-SCL (list size 16) by 1 dB E_b/N₀ at a block error rate of 10⁻³, under 16-QAM and Linear Minimum Mean Square Error detection, with two transmit antennas and four receive antennas. We also show that ORBGRAND, when paired with other moderate-redundancy linear codes, can yield substantial savings in the range of 0.5 to 2 dB in E_b/N₀ over CA-SCL decoding (list size 16) of CA-Polar codes with the same code parameters, for a block error rate of 10⁻³. We provide extensive benchmarks comparing ORBGRAND to CA-SCL and other soft-decision GRAND variants. We also integrate a GRAND decoder producing soft output into a MIMO iterative detection and decoding (IDD) receiver. Specifically, we apply an established technique that utilizes soft-output GRAND as the component decoder for the block turbo decoding of product codes. This block turbo decoder is evaluated as a soft-output decoder within a MIMO IDD receiver. We demonstrate competitive or superior performance relative to Belief Propagation (BP) decoding of 5G Low-Density Parity-Check (LDPC) codes. This approach also marks a use of GRAND for low-rate, high-redundancy FEC in a MIMO system. As GRAND in MIMO is still an emerging area of research, this work is an exploratory evaluation of GRAND for FEC in MIMO and highlights GRAND’s potential as a versatile and performant decoder in different MIMO receiver architectures.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163045</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Accuracy Predictions of Companion Classifiers&#13;
for LLM Routing</title>
<link>https://hdl.handle.net/1721.1/163044</link>
<description>Improving Accuracy Predictions of Companion Classifiers&#13;
for LLM Routing
Wu, Jessica L.
The increasing versatility of Large Language Models (LLMs) calls for developing effective routing systems to match tasks with the most suitable models, balancing accuracy and computational cost. This research introduces a novel meta-cascade routing framework that combines meta-routing, where a predictive model selects the appropriate LLM for a task, and cascading, where models are queried in sequence to optimize cost and performance. A critical component of this framework is the companion classifier, defined as a fine-tuned model trained to predict whether a particular LLM will generate an accurate response. We investigate whether incorporating features such as model responses into these classifiers can improve routing accuracy. Our preliminary experiments, using the Routerbench dataset, focus on training companion models that provide more stable and accurate routing decisions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163044</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formal Verification of Relational Algebra Transformations&#13;
in Fiat2 Using Coq</title>
<link>https://hdl.handle.net/1721.1/163042</link>
<description>Formal Verification of Relational Algebra Transformations&#13;
in Fiat2 Using Coq
Teshome, Christian
Data-intensive applications often involve operations over structured datasets, such as filtering, joining, and projecting records. Relational database systems generally use query planners to optimize high-level SQL queries into efficient execution plans. While these systems apply well-established query transformations, they typically assume the correctness of these transformations rather than formally proving them. The absence of formal guarantees can be a significant limitation for systems with strict correctness requirements. This thesis contributes to Fiat2, a Python-like high-level programming language for data-intensive workloads that integrates formal verification via the Coq proof assistant. We focus on proving the correctness of several rewrite-based query optimizations commonly used in database engines. Specifically, we formalize and prove the correctness of algebraic rewrites involving combinations of filters, joins, and projections, as well as join-reordering rewrites. All rewrites are proven in Coq to preserve the semantics of the original program under list semantics, meaning that the output lists are fully equivalent (or permutations, in the case of join reordering). These verified rewrites serve as a foundation for future optimization in Fiat2, enabling significant optimizations while preserving the semantics of the original queries with correctness guarantees. The results demonstrate the feasibility of integrating formally verified query optimizations into a practical high-level programming language.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163042</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Converting PyTorch Models to StreamIt Pipelines</title>
<link>https://hdl.handle.net/1721.1/163041</link>
<description>Converting PyTorch Models to StreamIt Pipelines
Rajvee, Muhender Raj
With the rise of large language models, there have been efforts to optimize machine learning inference to support a large volume of queries. Currently, the two main ways to do this are running optimized kernels for computing the forward inference pass and distributing computation across multiple GPUs or different cores in a GPU. Machine learning libraries such as PyTorch produce dynamic computation graphs in order to represent the forward pass of the model. PyTorch allows conversion of these dynamic graphs into static ones through just-in-time (JIT) compilation. These graphs can then be optimized further by the compiler. We propose an alternate way of optimizing these dynamic graphs. We convert the dynamic computation graph of PyTorch to pipelines in StreamIt, a domain-specific language (DSL) for streaming applications, and use the multi-stage compilation property of BuildIt to compile this pipeline in stages to inference code. We found that, while the inference latencies of models compiled in this way are slightly higher, they are still comparable to those of PyTorch models and are open to future optimizations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163041</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Interactive Visual Paradigm for Knowledge Graph&#13;
Question-Answering</title>
<link>https://hdl.handle.net/1721.1/163040</link>
<description>An Interactive Visual Paradigm for Knowledge Graph&#13;
Question-Answering
Ramkumar, Vayd Sai
In an era of information overload, verifying data reliability and provenance is critical, yet knowledge graphs (KGs) often remain complex for non-expert users. This thesis introduces TRACE, a Reasoning and Answer-path Comprehension Engine, a visualization tool enhancing transparency in KG question answering (KGQA). By abstracting intricate KGs into intuitive meta-nodes, TRACE simplifies exploration of large, multi-topic datasets. Its interactive interface allows users to navigate semantic communities and trace reasoning paths, fostering trust through clear answer derivation. Unlike cluttered traditional graph visualizations, TRACE’s meta-node approach provides a scalable, user-friendly solution, concealing technical complexities while enabling robust query validation. Large language models support natural language query parsing and community summarization, making KGs accessible to diverse audiences. TRACE positions itself as a vital widget for information platforms, empowering users to counter misinformation confidently. A user study and pipeline evaluation confirmed TRACE’s intuitive interface excels for complex queries, though multi-hop paths pose challenges, while processing tests demonstrated its scalable paradigm for large datasets. By prioritizing transparency and usability, TRACE redefines KGs as reliable tools for knowledge discovery, laying a foundation for future systems to deliver trustworthy, accessible information in a digital landscape fraught with uncertainty.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163040</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectral Analysis of Local Atomic Environments</title>
<link>https://hdl.handle.net/1721.1/163039</link>
<description>Spectral Analysis of Local Atomic Environments
Phung, Tuong
The representation of local environments is a cornerstone challenge in computational materials science, with profound implications for property prediction and materials discovery. This thesis presents a comprehensive investigation of spectral descriptors constructed from spherical harmonic expansions to represent the geometries of local atomic environments. Systematic computational experiments evaluate the robustness of these descriptors to geometric perturbations and their capacity to differentiate structurally similar configurations. The findings reveal a clear performance hierarchy, with higher-order descriptors offering increased geometric expressivity and reconstruction accuracy in resolving challenging structural cases. This research further examines methods for inverting spectral representations back to atomic coordinates, demonstrating that directly optimizing three-dimensional positions through gradient-based techniques yields markedly better reconstruction accuracy than approaches operating in Fourier space. Dimensionality reduction via latent space embeddings is also explored, showing that essential geometric features can be preserved in significantly compressed representations. Through methodical analysis of descriptor limitations, performance boundaries, and sensitivity to hyperparameters, this work establishes practical benchmarks and implementation guidelines for spectral descriptors. These contributions strengthen the foundation for reliable machine learning models in computational materials science, advancing both the accuracy and efficiency of atomic-scale modeling for materials design and discovery.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163039</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Optimization of Shipping Container for Package-Less Units</title>
<link>https://hdl.handle.net/1721.1/163038</link>
<description>Design and Optimization of Shipping Container for Package-Less Units
Minja, Baraka
Package-less shipping aims to deliver units without company X’s added packaging. This requires fulfillment systems and processes that handle units more gently. Part of this change involves the design and implementation of a container that will carry units from a distribution center to a delivery facility. This thesis presents the container analysis completed to determine the optimal container features and container type for package-less shipping. &#13;
Collapsible bags provide the best solution for package-less shipping in comparison to nestable and collapsible totes. Since ergonomic weight is the limiting constraint, the lower weight of the collapsible bag will allow for 1 or 2 more units per container. In addition, it benefits from 1) a lower process cost for returning to dock (a 3.7% cost reduction compared to a nestable tote), 2) better ergonomics (the collapsible tote has undesirable pinch points), and 3) improved cycle time (an estimated 2 s to open/collapse compared to 4 s for the collapsible tote).&#13;
Additional considerations that require more analysis relate to units per container and relocation. Based on company X’s past orders and unit types for the package-less shipping process, it is estimated that ~210 units per container (17.08 cu. ft.) is the maximum achievable for NA before reaching the ergonomic weight cap. However, company X expects the package-less shipping distribution center process to be constrained to ~105-133 units. Analysis of container relocation from delivery facilities to distribution centers indicates that it is worthwhile to investigate alternative relocation strategies in lieu of dedicated 53-foot container trailers to achieve lower relocation costs. &#13;
The collapsible bag is the best option assuming it has an expected lifetime of at least 2 years, which is when its NPV exceeds that of the two alternatives. These results are sensitive to the assumptions made, and it will be necessary to fine-tune this analysis once the end-to-end package-less shipping process has been fully mapped out.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163038</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transformation Tolerance of Facial Recognition&#13;
Technology and Informative Evaluation Metrics</title>
<link>https://hdl.handle.net/1721.1/163037</link>
<description>Transformation Tolerance of Facial Recognition&#13;
Technology and Informative Evaluation Metrics
Nakamura, Haley Marie
Over the last decade, machine learning-based facial recognition (FR) systems have continued to increase in popularity while spreading to unique deployment settings. Despite the large variance among FR input distributions, popular facial recognition benchmarks continue to characterize system performance using one aggregate score over a single dataset. In many cases, the limitations of this score are unclear to downstream users: assuming benchmark accuracy is high, how is it expected to change for an image sampled from a distinct distribution? Which transformations can the model handle robustly, and which cause failure? Meanwhile, there is a large body of human facial perception research that aims to understand the underlying mechanisms of human recognition. This field offers methodological inspiration for more informative evaluation techniques, including the characterization of recognition performance as a function of a quantifiable input transformation. This work performs such an analysis. The performance scores of five state-of-the-art FR models are characterized as a function of Gaussian blur strength, in combination with color variation. The performance-blur relationship is modeled as an s-curve, creating a highly interpretable format for discussion. Blur strength was consistently statistically significant to performance, but color variation did not significantly impact any model. Results are then compared to prior human recognition experiments. The best models outperform humans in low-blur regimes, while humans outperform all models in high-blur regimes. These results motivate the need for modern benchmarks that capture a range of input distributions. The analysis presented can lead to a deeper understanding of FR systems and provide a clearer interpretation of how model performance changes under quantified distribution shifts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163037</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Transfer as a Lever for Accelerated Medical Device Innovation: A Case-Based Mapping Approach</title>
<link>https://hdl.handle.net/1721.1/163036</link>
<description>Design Transfer as a Lever for Accelerated Medical Device Innovation: A Case-Based Mapping Approach
Magzoub, Amna Ahmed Eltayeb
In highly regulated industries such as medical devices, accelerating New Product Development (NPD) without compromising quality or compliance is a persistent challenge. This thesis investigates the design transfer process, a critical yet underexamined phase of NPD, as a strategic lever to reduce time-to-market. The project uses swimlane flowcharts and Design Structure Matrices (DSM) to map real-world processes, identify breakpoints, and classify rework (both planned and unplanned) in four case studies from Stryker Corporation. Key patterns emerged across case types: insufficient early-stage validation, misaligned cross-functional communication, and inadequate integration with suppliers were recurrent drivers of inefficiency. Comparative analysis revealed that concurrent engineering practices and knowledge sharing significantly reduce unplanned rework cycles and improve development speed. The study proposes actionable recommendations for optimizing design transfer, including: leveraging corporate know-how through intentional knowledge transfer meetings during the process benchmarking process, increased risk-taking during the development process by embracing concurrent engineering approaches, and investing in early-stage co-development by adopting regular collaboration activities with suppliers. These findings can inform broader process improvements in the development of medical devices and serve as a blueprint for other complex, cross-functional environments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163036</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Effect of the Solar Cycle on Satellite Orbital Lifetime</title>
<link>https://hdl.handle.net/1721.1/163035</link>
<description>The Effect of the Solar Cycle on Satellite Orbital Lifetime
Lisy, Celvi A.
The lifetime of a satellite in Low Earth Orbit (LEO) is affected by the 11-year solar cycle. At a fixed altitude, increasing solar activity increases atmospheric density, which leads to an increase in drag and a decrease in mission lifetime without using propulsion to recover altitude. Satellites may have longer orbital lifetimes if more of their mission is operational during a solar minimum due to lower solar activity and lower atmospheric drag. Satellites with larger area-to-mass ratios generally have shorter orbital lifetimes than satellites with small area-to-mass ratios. Missions that get delayed and have more of their operations during solar maximum than originally planned may have too short a mission lifetime or, conversely, may be at risk of increasing their orbital lifetime past regulatory limits (five years for satellites in LEO according to the FCC) if they launch closer to solar minimum. For example, a satellite with an area-to-mass ratio of 0.014 m²/kg – such as a 1U CubeSat – and a one-year mission that is launched in 2021 without onboard propulsion would have an orbital lifetime of 1.051 years. However, if that mission were delayed a year, a common occurrence in the industry, it would no longer be able to achieve its mission, as its orbital lifetime with a deployment in 2022 is 0.44 years. Conversely, if the same 1U CubeSat is launched during solar maximum in January 2025, it would have an orbital lifetime of 2.2 years and would re-enter in February of 2027. However, if that mission were delayed a year, the satellite would launch in January 2026 and instead be in orbit for 6.4 years before re-entering. The operator could be fined for violating the FCC deorbit limit of five years. This thesis quantifies the effect of launch or processing delays on satellite orbital lifetime based on orbit altitude and vehicle parameters such as mass, cross-sectional area, and bus size.
In general, it is found that four-year and six-year delays have the greatest effect on a satellite’s orbital lifetime because the satellite will be deorbiting almost half a solar cycle (5.5 years) from its intended deployment year. However, two-year delays can still affect satellite operators, as they can increase the orbital lifetime, even by up to 1.5 years for low area-to-mass ratio satellites in 400 km orbits and almost five years for satellites in orbits higher than 500 km. Two-year delays can also decrease the orbital lifetime of a satellite by up to 1.7 years for low area-to-mass ratio satellites in 400 km orbits and almost two years at altitudes higher than 500 km.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163035</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrical Diagnostics for Nanosecond Pulsed Discharge Reactors</title>
<link>https://hdl.handle.net/1721.1/163034</link>
<description>Electrical Diagnostics for Nanosecond Pulsed Discharge Reactors
Rao, Sankarsh R.
This thesis provides an introduction to transmission line theory (telegrapher’s equations) as the mathematical background needed to correctly perform and interpret electrical measurements in nanosecond pulsed discharge reactors. The mathematical framework is implemented in a numerical tool called VI-View, which is made available to the community to aid with the interpretation of electrical measurements and help explain discrepancies between different experimental arrangements and probe configurations. A brief manual on how to use the tool is provided, followed by a series of six case studies relevant to experimental setups/situations encountered in practice. The analysis of these case studies summarizes best practices when performing electrical and energy measurements in nanosecond pulsed discharge reactors. Case Studies 1 and 2 cover in-situ and remote measurements for reactors using one voltage and one current probe. Case Study 3 covers how two current probes, one on the high-voltage end and one on the low-voltage end, can achieve the same energy measurements as Case Studies 1 and 2. Case Studies 4 and 5 show how cables with varying lengths and dissimilar properties — as can sometimes be encountered in practice — affect the electrical signals. Case Study 6 shows how a variable resistance — a step drop from 50 MΩ to 10 Ω — within a load can be a first approximation to a plasma reactor with a discharge. Finally, an outlook on how these case studies connect to real, experimental waveforms is presented along with the limitations of the tool.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163034</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Methods for Setting Effective Aviation NOₓ Policies</title>
<link>https://hdl.handle.net/1721.1/163033</link>
<description>Stochastic Methods for Setting Effective Aviation NOₓ Policies
Reider, Sarah
Nitrogen Oxides (NOₓ) from aviation emissions are well known to have detrimental effects on air quality and the climate. Presently, they are regulated to preserve local air quality around airports. As part of the regulation process, aircraft engines are placed on a test stand with NOₓ levels measured at different thrust settings meant to mimic the aircraft’s emissions during landing and take-off. These are then constrained as a function of the engine’s overall pressure ratio (OPR) and rated thrust, with the allowed NOₓ emissions increasing with OPR. Despite increases in the stringency of this regulation, recent research suggests it is insufficient for protecting surface air quality from degradation by NOₓ emissions at cruise. Moreover, at high OPRs, NOₓ emissions increase substantially for relatively small reductions in fuel burn. In light of this, a new metric representative of cruise emissions is being investigated. This work considers effective methods to define this new regulation given a wide range of uncertainties in the tradeoff between NOₓ and CO₂ emissions at high OPRs. First, the combined climate and air quality cost of NOₓ from aviation cruise emissions is estimated as ∼$95,000/tonne using a 2019 flight inventory. Then, cruise limits are proposed, informed by the combined impact of NOₓ and CO₂ at cruise and with a similar slope to the current LTO standard. Finally, a Monte Carlo simulation is run, sampling NOₓ and CO₂ social costs for a series of hypothetical aircraft designed using the open-source Transportation Aircraft System OPTimization (TASOPT) model. This work takes a worst-case scenario approach, where the only response engine manufacturers can make to stricter standards is to reduce OPR and sacrifice fuel efficiency. Each aircraft’s emissions are evaluated during cruise to determine the probability of increasing environmental harm under different policy scenarios given these uncertainties.
The combined cost of NOₓ and CO₂ is compared to that of baseline engines that meet current regulations for each scenario. Results show that defining a cruise metric informed by the weighted combined cost of CO₂ and NOₓ could reduce total environmental cost at cruise by 15–43% while carrying a 6–7.4% risk of increasing total environmental cost for wide-body aircraft engines in the most stringent scenario. Less stringent scenarios showed similar risks of increasing harm for smaller potential environmental savings. In all cases, the risks associated with the proposed limits are driven by low-likelihood extremes in the uncertainty distributions of NOₓ and CO₂, further suggesting the benefit of an environmentally conscious standard.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163033</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Domain-Independent Mode Estimation for Human-Robot Collaboration</title>
<link>https://hdl.handle.net/1721.1/163032</link>
<description>Domain-Independent Mode Estimation for Human-Robot Collaboration
Gomez, Annabel Reyna
To collaborate safely and intelligently with humans, robots must infer high-level semantic states, such as intentions or interaction modes, from uncertain sensor input. While dynamic, probabilistic mode estimation is commonly used in fault diagnosis, this thesis extends the problem to activity recognition, where the goal is to estimate qualitative, symbolic human-object interaction states in real time. Robust human activity recognition is essential for collaborative and assistive robotics, particularly in dynamic or safety-critical environments. The core solution presented in this thesis is a mode estimator and its efficient implementation using the A* with bounding conflicts (A*BC) algorithm. This performs best-first enumeration over symbolic activity states while integrating recursive Bayesian filtering to maintain belief under noisy observations. Unlike low-level trajectory tracking or deep-learned classifiers, qualitative spatial filtering operates at the right level of abstraction to recognize symbolic actions. It can also generalize across domains with minimal retraining and support efficient, probabilistically grounded reasoning about uncertainty in both perception and symbolic mode transitions. The proposed system fuses RGB-D perception, object segmentation, qualitative spatial reasoning (QSR), and probabilistic inference into a real-time pipeline capable of tracking and inferring symbolic human-object interaction states. Evaluated in a human-robot rehabilitation setting, this domain-independent system successfully infers latent human and object activity states from noisy RGB-D data. It resolves ambiguity using Vision-Language Model (VLM)-guided semantic arbitration and demonstrates robustness and adaptability in unstructured environments. This work establishes qualitative spatial filtering with A*BC as a generalizable and efficient solution for semantic activity recognition, laying the foundation for future perception-driven collaborative systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163032</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed Energy Dynamics Control for Stable Power Electronic-Enabled Electric Power Systems</title>
<link>https://hdl.handle.net/1721.1/163031</link>
<description>Distributed Energy Dynamics Control for Stable Power Electronic-Enabled Electric Power Systems
Gada, Hiya Akhil
The increasing penetration of renewable and inverter-based resources is transforming modern power systems into fast, nonlinear, and heterogeneous networks. These converter-dominated systems operate on timescales much faster than traditional synchronous machines, making conventional modeling and control approaches, rooted in quasi-static phasor analysis and centralized architectures, inadequate for ensuring stability and scalability. This thesis adopts an energy space modeling approach grounded in first principles of energy conservation and system interconnection. It extends the previously introduced second-order energy dynamics model by relaxing the assumption that energy in tangent space can be treated as an independent disturbance. The resulting contribution is a third-order model that treats stored energy in tangent space as a dynamic state, enabling more expressive and accurate modeling of fast-timescale system behavior. Leveraging this extended energy space model, the thesis develops a multilayered distributed control architecture in which the nonlinear physical dynamics of each component are lifted to the higher-level linear energy space, capturing internal energy dynamics and real/reactive power flows, and integrated with the lower-level physical dynamics through well-defined mappings. Distributed controllers are designed in this energy space using only local states and minimal neighbor interaction, assuming a system-level coordination mechanism provides consistent references. Two control strategies, energy-based feedback linearizing control and sliding mode control, are developed and shown to achieve asymptotic convergence to reference outputs. The framework is validated on two systems: an inverter-controlled RLC circuit and a synchronous generator under load. Finally, the energy space framework is extended to structurally model inter-area oscillations (IAOs).
An inter-area variable is defined as the difference between the power incident on a tie-line from Area I and the power reflected into the tie-line from Area II. Simulations on a 3-bus, 2-area system confirm consistency with eigenmode analysis and show how tie-line strength and generator inertia affect IAO dynamics. A novel resonance phenomenon is also identified: instability arising from interaction between a system’s natural IAO frequency and time-varying disturbances from intermittent DERs. This previously unmodeled behavior is captured explicitly within the energy dynamics framework and may help explain recent blackout events in the Iberian Peninsula.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163031</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pushing the Limits of Active Data Selection with Gradient Matching</title>
<link>https://hdl.handle.net/1721.1/163030</link>
<description>Pushing the Limits of Active Data Selection with Gradient Matching
Zhang, Chris
As modern machine learning systems grow in scale, the inefficiencies of training on large, noisy, and imbalanced datasets have become increasingly pronounced—particularly in computer vision, where real-world data often contain labeling errors, occlusions, and redundancy. While large models can partially compensate by training exhaustively on massive datasets, this indiscriminate approach is computationally expensive and often inefficient. Active data selection offers a more efficient alternative by prioritizing examples that contribute most to model improvement. However, existing selection strategies (such as Rho Loss) still fall short of the optimal achievable performance. In this work, we propose the Gradient Informed Selection Technique (GIST), an active data selection method that prioritizes examples based on their gradient alignment with a small, fixed holdout set. At each training step, GIST computes per-example gradients and selects those that are most aligned with the holdout gradient, thereby guiding model updates toward better generalization. We evaluate GIST on noisy (Clothing1M) and clean (ImageNet) datasets and show that it consistently outperforms baselines across a range of selection ratios—that is, the proportion of a batch of data that the model selects to update weights on. To address the computational overhead of gradient-based selection, we introduce efficient variants using restricted-layer gradients, low-rank approximations, and gradient quantization. We also analyze GIST’s selection behavior, showing that it implicitly balances classes and repeatedly selects high-utility examples—two factors that enhance both robustness and learning efficiency. Our findings suggest that a more effective data curriculum is both discoverable and practical, and that GIST is a step toward achieving it.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163030</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Phase Transition for Recovering a Random Hypergraph from its Edge Data</title>
<link>https://hdl.handle.net/1721.1/163029</link>
<description>The Phase Transition for Recovering a Random Hypergraph from its Edge Data
Yao, Andrew
The weighted projection of a hypergraph is the weighted undirected graph with the same vertex set and edge weight equal to the number of hyperedges that contain the edge; the projection is the unweighted graph with the same vertex set and edge set consisting of edges with weight at least one. For d ≥ 3, after observing the unweighted and weighted projection of a random d-uniform hypergraph that is sampled using a generalization of the Erdős–Rényi random model, we study the recovery of a fraction of the hyperedges and the entire hypergraph. For both cases, we show that there is a sharp phase transition in the feasibility of recovery based on the density of the hypergraph, with recovery possible only when the hypergraph is sufficiently sparse. In particular, we resolve numerous conjectures from [5]. Furthermore, we present an efficient algorithm that is optimal for both exact and partial recovery. We also analyze the phase transition for exact recovery by exhibiting a regime of probabilities that is below the exact recovery threshold by a polylogarithmic factor but for which exact recovery is possible.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163029</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering Mobile-Only App Generation — Offline AI Code Generation with App Inventor</title>
<link>https://hdl.handle.net/1721.1/163028</link>
<description>Empowering Mobile-Only App Generation — Offline AI Code Generation with App Inventor
Yuan, Joyce
As digital tools become more accessible, creating software is becoming a powerful way for anyone to make real-world impact. Computational action—the idea that learners can build computing artifacts with authentic relevance to their lives and communities—reframes computing as a tool for empowerment. Low-code platforms like MIT App Inventor support this vision by fostering digital agency through purposeful creation. Recent advances in large language models (LLMs) expand these possibilities further by enabling code generation from natural language, offering a timely opportunity to lower the barrier to app creation. MIT App Inventor has long championed accessibility, allowing even young learners in underserved regions to build meaningful mobile apps. Its natural language tool, Aptly, enables users to describe app ideas and generate functional code. However, Aptly’s reliance on cloud-based LLMs limits access for users without stable internet—often those who could benefit most. This thesis addresses that challenge by enabling AI-powered app creation to run entirely offline on mobile devices. We fine-tune and quantize LLaMA 3B using QLoRA and deploy it on iOS with MLC LLM, enabling on-device inference without an internet connection. We also introduce a custom evaluation framework tailored to Aptly’s grammar, combining a Tree-sitter parser and a modified CodeBLEU metric to assess both semantic and syntactic quality. Using curated evaluation datasets, we benchmark out-of-the-box and fine-tuned models across prompting strategies. In our evaluations, fine-tuned GPT-4.1 achieved the highest normalized CodeBLEU score (0.36 ± 0.12) and parsed over 81% of completions, outperforming its baseline by more than 5%. QLoRA-finetuned LLaMA improved parseability by 11.7% over its base model, showing progress in adapting smaller models to the Aptly domain, though semantic fidelity remains a challenge. 
Our results show that offline natural language–to–app generation is feasible, and that smaller models can be adapted to the Aptly domain. By lowering the technical and infrastructural barriers to app creation, this work lays the foundation to empower AI-assisted programming that is accessible, offline, and on the phone.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163028</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>AutoDiff: A Scalable Framework for Automated Model Comparison</title>
<link>https://hdl.handle.net/1721.1/163027</link>
<description>AutoDiff: A Scalable Framework for Automated Model Comparison
Woo, Andrew Kyoungwan
Post-training adaptations such as supervised fine-tuning, quantization, and reinforcement learning can cause large language models (LLMs) with identical architectures to exhibit divergent behaviors. However, the mechanisms driving these behavioral shifts remain largely opaque, limiting the reliability and interpretability of adapted models. AutoDiff is a scalable, automated framework for tracing model divergence on a per-neuron basis. It exhaustively profiles every feed-forward (MLP) unit across a pair of models, identifies the neurons with the largest activation gaps, and links these differences to downstream behavioral changes. The pipeline identifies exemplars that maximize between-model activation divergence and clusters the highest-gap neurons into an interpretable, queryable difference report. Proof-of-concept experiments on GPT-2 small validate AutoDiff’s ability to rediscover synthetic perturbations without manual supervision. A larger case study on Llama 3.1-8B contrasts the base model with several adapted variants, surfacing neurons whose behavioral shifts align with observed topic-level gains and losses. By uncovering these mechanistic divergences, AutoDiff transforms black-box model updates into actionable insights, enabling safer deployment, principled debugging, and interpretable model evaluation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163027</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Schrödinger’s Carbon: Until Measured, Operational Emissions Remain Uncertain</title>
<link>https://hdl.handle.net/1721.1/163026</link>
<description>Schrödinger’s Carbon: Until Measured, Operational Emissions Remain Uncertain
Xia, Julia
Rapidly improving generative artificial intelligence has led to significant investments in datacenter infrastructure, driving power demand and raising environmental concerns. This has led to a growing body of research towards modeling the embodied and operational carbon of datacenter servers across a variety of paradigms. However, most existing models take in deterministic inputs and output a single average value that does not capture the inherent variability in estimating embodied and operational carbon emissions. Further, these average outputs obscure the impact of interacting factors, such as those related to deployment or software characteristics, each of which has its own underlying uncertainty distribution. This means that, in most cases, these averages do not accurately represent a particular server’s context. This thesis explicitly parameterizes and quantifies the full probabilistic distribution of operational carbon in AI inference tasks. It explores several factors of variability—deployment, spatiotemporal, and computational profile—and quantifies their impact on the overall carbon footprint through statistical and sensitivity analysis. While this work focuses on operational carbon, uncertainty propagation and an understanding of variability should be used across a datacenter server’s entire life cycle. When this methodology is used alongside existing uncertainty-aware embodied carbon measurements, it enables a holistic assessment from cradle to grave. This facilitates informed decision-making in server replacement, workload scheduling, hardware procurement, capacity planning, and other scenarios.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163026</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ideator Explorer: Enhancing AI-Assisted Ideation through Interactive Visualization</title>
<link>https://hdl.handle.net/1721.1/163025</link>
<description>Ideator Explorer: Enhancing AI-Assisted Ideation through Interactive Visualization
Wen, Haoran
Current AI-assisted ideation systems, often based on linear chat interfaces, struggle to help users effectively manage the complexity of creative exploration, hindering both divergent thinking across multiple paths and the convergent synthesis of ideas. This thesis introduces and evaluates Ideator Explorer, a human-AI ideation system built upon an interactive graph visualization interface designed to overcome these limitations. The core of the system is its spatial, tree-like representation of branching idea sequences. Formative user studies indicate that this visualization approach is preferred over chat interfaces for its organizational benefits and its effectiveness in helping users track parallel lines of thought during exploration. The spatial layout inherently supports both the exploration of diverse idea branches (divergence) and the identification of potential connections (convergence). This research focuses on the design and evaluation of this interactive graph interface, examining how its specific visualization and interaction techniques impact the user’s ability to navigate, organize, and develop ideas within complex ideation processes. The primary contribution is a novel, visually driven interface paradigm for human-AI collaboration that enhances the management and exploration of the creative solution space.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163025</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Type Checker for Annotated Assembly Programs</title>
<link>https://hdl.handle.net/1721.1/163024</link>
<description>Type Checker for Annotated Assembly Programs
Zanders, Julian
The rise of speculative-execution attacks, such as Spectre, has presented a security challenge to developers. Speculation on secret data can expose it, but running without speculation degrades runtime performance. To address this, researchers have been evaluating “smart” speculation schemes, which determine when to speculate and when not to in order to balance runtime with security. Our lab proposes Octal, a solution that utilizes software and hardware in tandem. Data values are marked as secret or public using type inference, and the correctness of this inference is checked using a type checker. Then, hardware can separate the secret and public values. My contributions were to the type checker, as well as scripting to evaluate the results.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163024</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Real-Time Non-Line-of-Sight Imaging Using Single-Photon LiDAR</title>
<link>https://hdl.handle.net/1721.1/163023</link>
<description>Real-Time Non-Line-of-Sight Imaging Using Single-Photon LiDAR
Tsao, Nicholas
Robust real-time imaging systems have allowed for many advances in robotics and autonomous navigation, though limited visibility in many real-world settings remains a significant challenge. Non-Line-of-Sight (NLOS) sensing allows imaging systems to “see around corners”, expanding their range of perception and providing access to information for real-time decision-making. A promising approach to NLOS sensing is through single-photon LiDAR, which is commonly used for range-finding in many imaging systems. In addition to range-finding, single-photon LiDAR systems can provide a deeply rich data source in the form of photon count histograms of light reflected off scene geometry, capturing detailed information from multiple bounces. NLOS imaging can be achieved by parsing third-bounce light from such single-photon LiDAR sensors, which can be used for a variety of detection and localization tasks, and recent work has demonstrated capabilities in a wide range of applications. This work aims to further develop NLOS imaging by demonstrating a fully functional NLOS system using low-cost, consumer-grade SPAD hardware for real-time NLOS imaging, detection, and localization. We lay the groundwork for NLOS imaging systems by developing infrastructure for NLOS processing in real time, and we examine the potential for NLOS systems to operate on cheap hardware using data-driven approaches. Our work implements and demonstrates full end-to-end capacity for these NLOS imaging systems in a number of applications, including person detection and localization, facilitating future research in this field and paving the way for NLOS integration into consumer devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163023</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Finetuning via Sparse Autoencoders</title>
<link>https://hdl.handle.net/1721.1/163022</link>
<description>Automated Finetuning via Sparse Autoencoders
Sivakumar, Ragulan
The field of interpretability has traditionally been confined to diagnostics. This thesis, however, presents a novel method using interpretability in sparse autoencoders to achieve better performance in small models via instruction finetuning. Specifically, we present UnderstandTune, an autonomous method for assembling high-quality instruction finetuning datasets with minimal human intervention, requiring only concise task descriptions rather than evaluation dataset distributions. Our empirical evaluations show that UnderstandTune consistently outperforms uninformed finetuning baselines across multiple benchmarks. Complementing this, Lalon introduces a mixture-of-informed-experts (MoIE) architecture that routes queries to specialized models independently finetuned via UnderstandTune. This modular approach achieves competitive performance against larger monolithic models in specialized domains while utilizing fewer parameters, training examples, and computational resources. The framework’s modularity enables independent optimization of components, from sparse autoencoders to MoIE routing mechanisms. This research demonstrates how interpretability can be used to enhance performance through intelligent data curation and suggests a new paradigm where interpretability and efficiency reinforce each other toward more capable, resource-efficient AI systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163022</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Machine Learning Models for RNA Structure Prediction and Design</title>
<link>https://hdl.handle.net/1721.1/163021</link>
<description>Generative Machine Learning Models for RNA Structure Prediction and Design
Rubin, Dana
Ribonucleic acid (RNA) is a fundamental molecule in biology, central to the regulation and execution of life’s most essential processes. Its diverse roles range from encoding genetic information to catalyzing biochemical reactions. Beyond its modern biological functions, RNA is also believed to have played a pivotal role in the origins of life, which underscores its evolutionary significance. Unlocking the full potential of RNA research and design requires a deep understanding of the intricate relationship between RNA’s three-dimensional structure and sequence. Predicting RNA 3D structures remains a challenging problem due to the complexity of its folding landscape and the limited availability of high-resolution structural data. Inspired by recent advances in deep learning for protein folding and design, this thesis explores novel geometric and generative architectures for modeling RNA. We first present a systematic study on RNA structure prediction using equivariant neural networks within denoising diffusion probabilistic models (DDPMs). Our folding model, named Klotho, captures local atomic interactions and structural features using SO(3)-equivariant message passing layers with a point cloud data representation. Ablation studies confirm that Klotho’s performance scales with higher dimensionality and improves when the input is enriched with secondary structure information and sequence embeddings from RNA foundation models. Building on this foundation, we introduce RiboGen, a multimodal deep learning model that jointly generates both RNA sequence and all-atom 3D structure. RiboGen integrates Flow Matching and Discrete Flow Matching within a unified multimodal representation and employs Euclidean Equivariant Neural Networks to learn geometric features. 
Our results demonstrate that RiboGen can generate chemically plausible, self-consistent RNA molecules, highlighting the potential of co-generative models to explore the sequence–structure landscape of RNA in a unified, data-driven framework. Together, these contributions advance the field of RNA modeling by offering scalable, symmetry-aware architectures for prediction and design. They lay the groundwork for future generative systems in RNA biology, therapeutic development, and biotechnological innovations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163021</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient Object Perception for Robotics</title>
<link>https://hdl.handle.net/1721.1/163020</link>
<description>Resilient Object Perception for Robotics
Shi, Jingnan
A broad array of applications, ranging from search and rescue to self-driving vehicles, requires robots to perceive and understand the geometry of objects in the environment. Object perception needs to reliably work in a variety of scenarios and preserve a desired level of performance in the face of outliers and shifts from the training domain. Obtaining such a level of performance requires robust estimation algorithms that are able to identify and reject outliers, as well as techniques to continually improve the performance of learning-based perception modules during test-time. In this thesis, we address these challenges by proposing (1) certifiably optimal solvers and a graph-theoretic framework that together help achieve state-of-the-art pose estimation performance even under high outlier rates, (2) self-supervised object pose estimators that can improve performance during test-time with accuracy comparable to state-of-the-art supervised methods, and (3) a test-time adaptation method for both object shape reconstruction and pose estimation without the need for CAD models. Throughout the thesis, we demonstrate that by using a variety of tools from optimization and learning, we can develop resilient object perception systems that perform reliably in a wide range of conditions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163020</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing a Data-Centric Framework for Predictive Maintenance of Wind Turbines</title>
<link>https://hdl.handle.net/1721.1/163019</link>
<description>Enhancing a Data-Centric Framework for Predictive Maintenance of Wind Turbines
Pan, Raymond
Predictive maintenance of wind turbines is a machine learning task aimed at minimizing repair costs and improving efficiency in the wind turbine and renewable energy industry. Existing machine learning solutions often fail to meet real-world deployment requirements due to fragmented pipelines, lack of domain integration, and reliance on black-box models. Zephyr, a data-centric machine learning framework, addresses these challenges by enabling Subject Matter Experts (SMEs) to incorporate their domain knowledge into the prediction process, and to leverage automated tools for labeling, feature engineering, and prediction tasks without requiring extensive technical knowledge. However, the current version of Zephyr still has limitations, including usability gaps and a reliance on external tools for certain steps. Case studies with real-world data from the renewable energy company Iberdrola demonstrate Zephyr’s potential to integrate domain expertise into wind turbine predictive maintenance (thus streamlining the process) but also expose a sub-optimal user experience. This thesis explores gaps in the current state of the Zephyr framework and proposes refinements to enhance its usability. Key improvements include the consolidation of current tooling and relevant external libraries into a single API, state management with careful logging and exception handling, and improved support for model evaluation. These enhancements aim to support seamless end-to-end predictive modeling workflows, and to provide a more refined and flexible user experience for the Zephyr user base.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163019</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimalist Approach to End-to-End Vision Language Navigation with Multi-Modal Foundation Model Features</title>
<link>https://hdl.handle.net/1721.1/163018</link>
<description>Minimalist Approach to End-to-End Vision Language Navigation with Multi-Modal Foundation Model Features
Mishra, Kartikesh
Recent vision-language navigation (VLN) approaches leverage large models, prompt engineering, and/or explicit reasoning for instruction interpretation and agent guidance. We introduce MiniNav, a minimalist framework employing frozen vision-language foundation models as patch-wise feature extractors, avoiding data- and compute-heavy fine-tuning and cumbersome language model reasoning. Our lightweight control policies (∼ 10⁵ trainable parameters) are trained on a compact dataset of language-specified navigational behaviors (∼ 10² runs, ∼ 10⁴ frames per behavior). We demonstrate generalization to novel objects and scenes, including direct real-world transfer, despite training on only two objects in a single simulated environment. Through its simple and scalable design, MiniNav provides an alternative to computationally intensive pipelines for robust real-world instruction-following. Our solution can provide a reference for evaluating the effective edge of more complex and larger VLN policies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163018</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic Sampling: A Framework for Enhancing Speed and Performance of Financial Fraud Detection Models</title>
<link>https://hdl.handle.net/1721.1/163017</link>
<description>Strategic Sampling: A Framework for Enhancing Speed and Performance of Financial Fraud Detection Models
Mitchell, Samuel
Financial fraud detection is a high-stakes field where rapid inference is essential. While state-of-the-art fraud detection models vary in terms of architectural decisions and appear to exhibit unique computational bottlenecks, we highlight that their run-times are all dominated by extensive information-gathering steps. These steps involve aggregating information from a large set of nodes or edges within a graph, and these intensive steps are performed O(|V|) or O(|E|) times during an inference forward pass, on a graph with |V| nodes and |E| edges. We introduce Strategic Sampling, a general method to accelerate these information-gathering steps. Our approach tailors sampling strategies based on the specific objective function used in each model’s information-gathering process, selecting the most relevant pieces of information to use in each step. This ensures that critical information is retained while significantly reducing the amount of data processed, thus speeding up the computation. We conceptually demonstrate how Strategic Sampling can be applied to message-passing Graph Neural Networks, Graph Transformers, and TGEditor (a state-of-the-art graph editing algorithm). To showcase the effectiveness of our proposed Strategic Sampling method, we implement it in the TGEditor codebase. Our results show that Strategic Sampling not only significantly reduces computation time by more than an order of magnitude, but also improves the F1 score, enhancing both efficiency and performance. This study underscores the potential of Strategic Sampling to universally boost the performance of various financial fraud detection models, paving the way for faster and more accurate fraud detection.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163017</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Grain Boundary Solute Segregation in Vanadium</title>
<link>https://hdl.handle.net/1721.1/163016</link>
<description>Grain Boundary Solute Segregation in Vanadium
Ng, Daniel S.
Vanadium alloys are a candidate structural material in nuclear fusion applications, where the presence of grain boundaries can improve mechanical properties and act as a sink for radiation-induced defects. Solutes with a thermodynamic preference to segregate to grain boundaries can stabilize them, making this a prime consideration for alloy design, but there are limited quantitative solute segregation data for vanadium. Based on results from an ab-initio computational framework for predicting the spectrum of grain boundary segregation energies across the periodic table, select nanocrystalline vanadium-based binary alloy systems were synthesized via mechanical alloying for targeted experiments characterizing differences in segregation strength. Scanning transmission electron microscopy and energy-dispersive x-ray spectroscopy measurements of solute concentrations in the grain boundary and bulk validate computational predictions of the average segregation strengths for different solutes, while showing inhomogeneous solute distributions along the grain boundary network that confirm the necessity for a spectral model that captures the behavior of site-specific segregation energies.&#13;
&#13;
After establishing the segregation behavior of different solutes in vanadium, the effects of solute segregation on other properties are examined. Heating experiments demonstrate that vanadium alloys containing strongly segregating species retain smaller grain sizes upon thermal annealing, indicating better grain boundary stability. The powder metallurgical route used to produce these vanadium alloys requires a subsequent sintering step to densify powders into bulk parts for engineering applications, and dilatometry experiments reveal that the addition of strongly segregating solutes also dramatically suppresses the sintering behavior. A kinetic analysis of the dilatometry data suggests that rapid grain boundary diffusion pathways that are necessary for effective sintering are obstructed by solute segregants, which has important repercussions for the processability of these alloys. Finally, microstructural characterization and nanohardness testing after ion-irradiation experiments demonstrate that the alloys with solute-stabilized grain boundaries are more resistant to nanovoid formation and radiation hardening. The work in this thesis advances our understanding of solute segregation and its effects in vanadium alloys, and highlights an approach for controlling grain boundaries that may facilitate future alloy design efforts for improved microstructural stability and radiation damage tolerance.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163016</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modular Construction of Complex-Architected Bottlebrush Block Copolymers and Their Self-Assembly Behaviors</title>
<link>https://hdl.handle.net/1721.1/163015</link>
<description>Modular Construction of Complex-Architected Bottlebrush Block Copolymers and Their Self-Assembly Behaviors
Sun, Zehao
Microphase-separated block copolymers are attractive materials for self-assembled nanolithography, yet there is a disconnect between the simple patterns commonly formed by block copolymers and the complex patterns required for many nanoscale applications, particularly in microelectronics. To meet this challenge, researchers have sought to design and build copolymer systems at ever-increasing levels of complexity at the (macro)molecular level, which promises emergent, intriguing properties that are otherwise absent. However, the synthetic challenge as well as the vastly increased parameter space have hindered the systematic study of such complex systems. An efficient, modular synthetic route is thus highly desired for Lego-like molecular construction of property-decoupled, individually-tunable target materials.&#13;
&#13;
In this thesis, we will highlight the research endeavor in developing a multiblock Janus bottlebrush copolymer architecture as a novel platform for generation of diverse nanostructures that have been challenging to fabricate. The architecture, which features two orthogonal Janus domains, can be facilely constructed from corresponding building blocks by graft-through synthesis and can yield hierarchically engineerable phase-in-phase patterns.&#13;
&#13;
Surprisingly, the two constituent domains, though relatively independent of each other, behave significantly differently when combined together under certain circumstances. Their collective behavior gives rise to two low-symmetry mesh-like network phases (monoclinic and tetragonal respectively) that have not been observed in other soft materials before, which are of both fundamental and technological interest. Through a suite of experimental and computational studies, we show that this peculiar phenomenon is an outcome of intrinsic molecular confinement, an emergent effect unique to multi-body, multi-hierarchy complex architectures. This work demonstrates that intrinsic molecular confinement is a viable path to bottom-up assembly of new geometrical phases of soft matter, extending the capabilities of block copolymer nanofabrication.&#13;
&#13;
As another example of modular synthesis, we will show an iterative polymerization methodology for controlled synthesis of bottlebrush copolymers with expanded compositional and architectural scope. When synergizing with other components, this strategy allows rapid access to functional materials that display different phase behavior when compared to the self-assembly of conventional copolymers.&#13;
&#13;
Our work introduced here is expected to facilitate the synthesis of complex functional copolymers, spark interest in the exploration of their property-function relationship, and enable more opportunities for their application in nanopatterning and other advanced materials.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163015</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization Techniques for Trustworthy 3D Object Understanding</title>
<link>https://hdl.handle.net/1721.1/163014</link>
<description>Optimization Techniques for Trustworthy 3D Object Understanding
Shaikewitz, Lorenzo Franceschini
Autonomous machines require reliable 3D object understanding to interpret and interact with their environment. In this thesis, we consider two tightly coupled 3D object understanding problems. Shape estimation seeks a consistent 3D model of an object given sensor data and some set of priors. Pose estimation seeks an estimate of the object’s position and orientation relative to an invariant shape frame. In general, these problems are non-convex and thus difficult to solve. We present algorithms which nonetheless solve shape and pose estimation efficiently and with assurances in terms of optimality, uncertainty, or latency. We begin in the multi-frame tracking setting, where we propose the certifiably optimal estimator CAST⋆ for simultaneous shape estimation and object tracking. CAST⋆ uses 3D keypoint measurements extracted from an RGB-D image sequence and phrases the estimation as fixed-lag smoothing. Temporal constraints enforce rigidity and continuous motion. Despite the non-convexity of this problem, we solve it to certifiable optimality using a small-size semidefinite relaxation. We also present a compatibility-based outlier rejection scheme to handle outliers, and evaluate the proposed approach on synthetic and real data. Next, we focus on estimating the pose of an object given its shape and a single RGB image (no depth). Assuming only bounded noise on 2D keypoint measurements (e.g., from conformal prediction), we derive an estimator for the most likely object pose which uses a semidefinite relaxation to initialize a local solver. We pair this with an efficient uncertainty estimation routine which relies on a generalization of the S-Lemma to propagate keypoint uncertainty to high-probability translation and rotation bounds. The high-probability bounds hold regardless of the accuracy of the pose estimate, and are reasonably tight when tested on the LineMOD-Occluded dataset. 
Lastly, we propose a sub-millisecond solution to simultaneous estimation of object shape and pose from a single RGB-D image. Our approach converts the first-order optimality conditions of the non-convex optimization problem to a nonlinear eigenproblem in the quaternion representation of orientation. We use self-consistent field iteration to efficiently arrive at a local stationary point, finding solutions more than an order of magnitude faster than Gauss-Newton or on-manifold local solvers on synthetically generated data.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163014</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Joint Localization and Synchronization via User Cooperation in Non-Terrestrial Networks</title>
<link>https://hdl.handle.net/1721.1/163013</link>
<description>Joint Localization and Synchronization via User Cooperation in Non-Terrestrial Networks
Morrison, James C.
Next-generation (xG) wireless networks require accurate localization and synchronization for efficient resource management and emerging applications. Non-terrestrial networks (NTN) with low Earth orbit (LEO) satellites offer a promising alternative for positioning, navigation, and timing (PNT) by providing diversity and increasing the signal-to-noise ratio (SNR) over global navigation satellite systems (GNSS). However, the primary challenge in NTN-based localization with LEO satellites is the lack of precise clock synchronization, which introduces biases in time-of-arrival (TOA) measurements and limits localization accuracy. This paper introduces a joint cooperative localization and synchronization (JCLS) framework that addresses this challenge through spatiotemporal cooperation, soft information, and simultaneous synchronization. Furthermore, we propose a three-step algorithm for performing JCLS. The first step calculates a coarse position estimate using TOA measurements and the Gauss-Newton method. Then, this coarse estimate is updated using the Levenberg-Marquardt method, which performs joint localization and synchronization. Finally, we derive a soft information-based filter that is used to continuously refine the position and clock error estimates as new measurements become available. We characterize the fundamental performance limits of JCLS using Fisher information, which offers insight into its localization and synchronization accuracy bounds. Furthermore, simulation results based on TOA measurements of the 3rd Generation Partnership Project (3GPP) 5G New Radio positioning reference signal (PRS) demonstrate that the proposed algorithm for JCLS significantly improves localization and synchronization accuracy compared to non-cooperative methods.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163013</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multipartite Quantum Clock Synchronization Via Collective Symmetric States</title>
<link>https://hdl.handle.net/1721.1/163012</link>
<description>Multipartite Quantum Clock Synchronization Via Collective Symmetric States
Keskin, Ufuk
This thesis investigates multipartite quantum clock synchronization (QCS) tasks using a class of quantum states, called collective symmetric (CS) states, which generalize Dicke and N00N states. Employment of CS states in previous QCS procedures is shown to improve synchronization performance in various network scenarios. The focus of the paper is on QCS procedures that, after the distribution of quantum states, rely exclusively on local operations and classical communication (LOCC), ensuring compatibility with highly noisy quantum channels. Two synchronization scenarios are considered: (i) synchronization between the two nodes of an arbitrarily chosen pair of nodes, and (ii) global synchronization where all nodes wish to synchronize their clocks to a common average time. First, a framework in which the previous procedures operate employing the CS states is introduced. Using this framework, possible limitations of the QCS procedures in terms of estimation ambiguity and lack of robustness are pointed out. Second, a procedure referred to as the tactical delay procedure (TDP) is proposed for each of the two synchronization scenarios. The TDP resolves the mentioned limitations and outperforms the state-of-the-art multipartite QCS procedures in terms of synchronization precision without requiring additional quantum resources.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163012</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Embedded HOWFSC Algorithms</title>
<link>https://hdl.handle.net/1721.1/163011</link>
<description>Accelerating Embedded HOWFSC Algorithms
Eickert, Brandon
The quest to directly image planets of other solar systems demands not only state-of-the-art coronagraphs, but also extreme performance from space-based processors. Direct imaging requires precise wavefront control to acquire the 10¹⁰ contrast necessary to reveal a dim, Earth-like exoplanet. This precise level of control is only possible if high-order wavefront sensing and control (HOWFSC) algorithms are executed with enough speed to offset wavefront error accumulation. Of the many aspects that make high-contrast imaging difficult, a central bottleneck is the speed at which we can run these algorithms. At the center of this work, we aim to accelerate the execution of two foundational HOWFSC algorithms: optical modeling and Electric Field Conjugation (EFC). Optical modeling underpins both Jacobian-based EFC, and a relatively new variant of EFC, called adjoint-based EFC.&#13;
The two main contributions of this thesis are to port bottleneck HOWFSC algorithms to the relevant computing environments, and quantify speedups attained by both algorithm choice and implementation optimization. This work explores the acceleration of optical modeling for a vector vortex coronagraph through the use of the FFTW library, and the acceleration of EFC by implementing adjoint-based EFC in an embedded context. We utilize functional analogs to radiation-hardened processors, using the NXP T1040 in place of the BAE RAD5545, and the NXP LS1046 in place of the LS1046-Space. We find that the FFTW library enabled a factor of six speedup for 4096 × 4096 fast Fourier transforms (FFTs), and a factor of five for 2048 × 2048 FFTs. With these significant speedups, the bottleneck within the vortex operations of the optical model shifts from the FFT to matrix multiplication. We additionally time the execution of the underlying routines of Jacobian-based EFC and AD-EFC to estimate that AD-EFC is 46 times faster than Jacobian-based EFC. Despite these speedups, AD-EFC is still a factor of 124 away from 100-second latency for our specific optical model. These results demonstrate that one to two orders of magnitude of speedup must be attained by either further optimizing algorithm implementations, or exploring other parallelization strategies, computing architectures, and mission paradigms to achieve a latency on the order of 100 seconds.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163011</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formalizing Causal Models Through the Semantics of Conditional Independence</title>
<link>https://hdl.handle.net/1721.1/163010</link>
<description>Formalizing Causal Models Through the Semantics of Conditional Independence
Zhang, Anna
Many foundational tools in causal inference are based on graphical structure and can involve complex conditions that obscure the underlying causal logic. Given the inherent complexity and subtlety of cause-and-effect phenomena, establishing formal guarantees about these tools is both challenging and important. This thesis presents a semantics-driven formalization of causal models within the Coq proof assistant, enabling precise, mechanized reasoning about causal relationships. Central to this work is a new function-based definition of conditional independence, which captures how changes propagate through a causal graph. We prove that this semantic notion is equivalent to the standard graphical criterion of d-separation, thereby establishing a rigorous bridge between structural and semantic interpretations of independence. The formalization includes a library of graph-theoretic and causal-reasoning tools, encompassing key concepts such as mediators, confounders, and colliders. By linking the syntactic and semantic perspectives on causality, this work lays a robust foundation for formally verifying causal assumptions and guiding experimental design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163010</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contextual Knowledge Sharing in Multi-Agent Long-Horizon Planning Settings with Centralized Communication and Coordination</title>
<link>https://hdl.handle.net/1721.1/163009</link>
<description>Contextual Knowledge Sharing in Multi-Agent Long-Horizon Planning Settings with Centralized Communication and Coordination
Zhang, Jackson
Embodied multi-agent systems, comprising autonomous agents interacting within shared environments, enable intelligent, collaborative solutions for tasks requiring real-time coordination and adaptability. While applications span diverse fields, from disaster response to healthcare, planning in these systems remains challenging due to partial egocentric observations and limited environmental awareness. This work addresses these challenges by introducing a software module that synthesizes a shared world state from individual agent views, maintaining spatial information about objects and agents to support more effective joint action planning. Integrated into the LLAMAR framework, this module aims to improve planning accuracy and efficiency. The proposed approach is evaluated using metrics such as success rate, transport efficiency, and coverage performance. Our evaluation demonstrates that utilizing a perfect (oracle-generated) world state significantly enhances planning effectiveness. Notably, under these ideal conditions, the success rate of the LLAMAR planner improved by over 16%. These findings underscore the critical impact of accurate world state representation on multi-agent performance and highlight the potential for significant advancements in collaborative task execution in dynamic, unstructured settings.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163009</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization, Processing, and Synthesis of Extreme-Performance Continuous Carbon Nanotube Network Composites</title>
<link>https://hdl.handle.net/1721.1/163008</link>
<description>Characterization, Processing, and Synthesis of Extreme-Performance Continuous Carbon Nanotube Network Composites
Durso, Michael Nathan
Continuous carbon nanotube (CNT) networks are an emerging, hierarchically-structured, and commercially available nanomaterial built from countless CNT nanocrystals. These macroscopic yarn materials promise to bridge the gap between microscopic CNT fibers – which are well-known for their superlative material properties – and human-scale fiber reinforcements for extreme-performance composites. Yet because the constituent CNTs interact only via intermolecular forces, network properties fall short of their building blocks. Although these materials show promise as reinforcement in composites, the networks’ low-permeability and tortuous nanoporous structure renders imbibition with liquids like a polymer matrix or surface functionalizing agents challenging. Thus, traditional composite fabrication strategies can be ineffective when applied to CNT yarns, especially commercial products subject to proprietary microstructural manipulation.&#13;
&#13;
Using commercially-available CNT yarns fabricated through floating-catalyst chemical vapor deposition (FCCVD) as model systems, we first explore yarn characteristics which are unique to their hierarchical, bundled-fiber structure, placing focus on the oxygen-rich amorphous carbon phase found in pre-densified, chemically-stretched yarns. A green hydrothermal technique is explored to remove this phase from the surface level inward, allowing for purification and improved infiltrability. However, we find this phase is distinct from previously-reported amorphous carbons found in CNTs, showing it behaves as a matrix which may improve polymer bonding. An analysis of imbibition and fluid transport in these CNT yarns finds that while infiltration of low-viscosity liquids like water is thermodynamically-favored, it is limited when surpassing the threshold of capillary pore percolation. Nevertheless, infiltration in lower-density networks is not only observed, but exploited through the demonstration of dielectric heating in a microwave reactor, where we show fluid imbibed within the network can be boiled to induce swelling and exfoliation of CNT bundles (or conversely, this may be avoided) through optimization of the heating parameters and solvent.&#13;
&#13;
Next, with a firm understanding of the yarn networks’ properties and the impact of various processing effects, we demonstrate two techniques of producing polymer in-situ using dissolved monomers to side-step slow infiltration. The first technique is in-situ interfacial polymerization (ISIP), which is adapted to the yarns studied in this work to yield polyetherimide–CNT yarn composites. When applied to chemically-stretched yarn, specific strengths as high as 2.2 GPa/(g/cm³) are achieved in the flexible and durable yarn composite. We show parameters and conditions which maximize tensile properties, discuss challenges associated with the rapid nature of the process, and conclude with the successful demonstration of a roll-to-roll fabrication scheme for producing arbitrary amounts of polymer.&#13;
&#13;
In our second technique, we produce extreme-performance polyimide and polybenzimidazole composites through green in-situ polymerizations (ISSP) in CNTs and macroscopic fiber networks. This approach utilizes superheated water and alcohol as a powerful medium to disperse monomers and initiate polymerization of high-performance coatings within a porous network. We demonstrate ISSP-CNT composites with variable coating morphologies (conformal, shish-kebab, etc.), in-air stability to over 500°C, and doubled specific stiffness and specific strength. Finally, we validate the multifunctional behavior of polyimide-CNT composites by showing a strong, flexible composite can store energy and behave as a free-standing battery electrode.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163008</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parametric Study of Novel Passive Thermal Control Technology for Spacecraft</title>
<link>https://hdl.handle.net/1721.1/163007</link>
<description>Parametric Study of Novel Passive Thermal Control Technology for Spacecraft
Shafer, Emma
Thermochromic variable emissivity materials (VEMs) are a relatively new passive thermal control technology used for spacecraft radiators. VEMs passively change their emissivity based on their temperature, exhibiting low emissivity at low temperatures and high emissivity at high temperatures. This property allows spacecraft to operate with reduced heater power and less extreme temperature swings without adding active thermal control systems, and gives VEM technology the potential to become more widely used in spacecraft radiators. Because thermochromic VEMs are still a relatively new technology, there has not yet been a study with a parametric sweep of possible VEM profiles and common spacecraft parameters to determine the best-case uses of particular VEM profiles. This thesis models a single-node spacecraft in an equatorial low Earth orbit using Thermal Desktop, varying the spacecraft’s shape, surface area, and thermal mass. The temperature history of the spacecraft in orbit is recorded, in particular its orbit minimum, maximum, and average temperatures and its orbit temperature range, and twelve VEM profiles are compared against default black and white paint materials to see how each profile changes these four metrics. The desired outcome is for the VEMs to reduce the temperature range the most compared to black or white paint while keeping temperatures within typical requirements for spacecraft components. It is found that, compared to white paint, VEMs always increase the orbit minimum, maximum, and average temperatures and the temperature range across all nodal thermal masses and surface areas studied. 
For spacecraft with lower surface areas, white paint alone decreases the temperature too much for typical spacecraft components; so even though white paint always decreases the temperature range compared to VEMs, VEMs are recommended over white paint for lower-surface-area spacecraft because they are better at keeping components within typical temperature requirements. When VEMs are compared to black paint, black paint has lower minimum temperatures and greater maximum temperatures than all VEMs at greater surface areas. For smaller surface areas, the node covered in black typically has minimum and maximum temperatures in the middle of the VEMs’ minimum and maximum temperatures. For all surface areas and thermal masses, the average temperature of the black node is typically in the middle of the VEMs’ average temperatures; relative to the VEMs, the black node’s average temperature decreases as node height increases. For all node heights and thermal masses, VEMs always decrease the temperature range compared to black paint. VEMs are thus shown to be better than black paint at keeping spacecraft components within typical temperature requirements, and which VEM to choose depends on the specific spacecraft component and its temperature requirements. The biggest difference among individual VEM profiles is the orbit average temperature: the lower the VEM’s transition temperature, the lower the average temperature. Only at the greatest nodal surface areas and smallest nodal heights is there a significant difference in temperature range between individual VEM profiles; typically, the lower the VEM’s transition temperature, the smaller its temperature range. Future work includes expanding the parameters studied and examining different orbits, different spacecraft shapes, and different VEM profiles.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163007</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Path Planning for Autonomous Sailing Vessels: Developing Robust and Efficient Survey Strategies</title>
<link>https://hdl.handle.net/1721.1/163006</link>
<description>Path Planning for Autonomous Sailing Vessels: Developing Robust and Efficient Survey Strategies
Ahlers, Matthew C.
Autonomous sailing vessels offer a promising solution for maritime research, providing low-maintenance and sustainable platforms for environmental monitoring and data collection. These vessels utilize wind power, eliminating the need for conventional fuel and enabling long-duration operations with minimal environmental impact. Their applications range from oceanographic studies to maritime surveillance, where persistent and autonomous data collection is essential. This thesis explores the challenges and methodologies associated with path planning for autonomous sailing, particularly in the context of survey operations. Unlike traditional motorized vessels, sailing autonomy must account for wind variability, sail dynamics, and limited maneuverability, requiring specialized path-planning techniques to ensure efficient and reliable navigation. The research investigates various sail and hull configurations, the dynamics of wind-powered propulsion, and the application of autonomy frameworks such as MOOS-IvP. A key focus is on optimizing continuous coverage path planning (CPP) to maximize efficiency while adapting to environmental constraints. By integrating real-time wind data and vessel performance characteristics, the study refines survey strategies that enhance mission effectiveness. Different survey strategies are implemented and evaluated using both simulation and real-world testing on the Charles River. These trials demonstrate the feasibility of fixed-path decomposition approaches and adaptive moving-horizon control methods, and evaluate the impact of wind conditions on autonomous sailing performance. The results contribute to the development of robust and efficient survey strategies that improve the autonomy and reliability of wind-powered marine vessels.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163006</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Census-Based Population Autonomy for Marine Robots: Theory and Experiments</title>
<link>https://hdl.handle.net/1721.1/163005</link>
<description>Census-Based Population Autonomy for Marine Robots: Theory and Experiments
Paine, Tyler
Collaborating groups of robots show promise due to their ability to complete missions more efficiently and with improved robustness, attributes that are particularly useful for systems operating in marine environments. A key issue is how to model, analyze, and design these multi-robot systems to realize the full benefits of collaboration even with limited communication, a challenging task since the domain of multi-robot autonomy encompasses both collective and individual behaviors. This thesis presents a layered model of multi-robot autonomy that uses the principle of census, or a weighted count of the inputs from neighbors, for collective decision-making, coupled with multi-objective behavior optimization for individual decision-making. The census component is expressed as a nonlinear opinion dynamics model, and the multi-objective behavior optimization is accomplished using interval programming. This model can be reduced to recover foundational algorithms in distributed optimization and control, while the full model enables new types of collective behaviors that are useful in real-world scenarios. To illustrate these points, a new method for distributed optimization of subgroup allocation is introduced in which robots use a gradient descent algorithm to minimize the portions of the cost functions that are locally known, while being influenced by the opinion states of neighbors to account for the unobservable costs. With this method the group can collectively use the information contained in the Hessian matrix of the total global cost. In addition, the critical issue of controlling subgroup size to minimize a collective cost signal is addressed, an initial step toward establishing a general definition of controllability for the nonlinear opinion dynamics model. 
The utility of this model is experimentally validated in three categorically different experiments with fleets of autonomous surface vehicles: an adaptive sampling scenario, a high value unit protection scenario, and a competitive game of capture the flag.
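The census mechanism described above can be illustrated with a minimal numerical sketch, assuming a simple damped, saturating (tanh) opinion update over a fixed neighbor graph; the gains, graph, and bias values below are illustrative inventions, not the thesis's exact nonlinear opinion dynamics model:

```python
import numpy as np

def opinion_step(z, A, d=1.0, u=2.0, b=None, dt=0.01):
    """One Euler step of a toy nonlinear opinion dynamics model:
    each agent damps its own opinion (-d*z) and saturates a weighted
    census of its neighbors' opinions (u*tanh(A @ z)), plus a bias b."""
    if b is None:
        b = np.zeros_like(z)
    dz = -d * z + u * np.tanh(A @ z) + b
    return z + dt * dz

# 4 robots on a ring; a small bias on robot 0 breaks the deadlock
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
z = np.zeros(4)
b = np.array([0.05, 0.0, 0.0, 0.0])
for _ in range(2000):
    z = opinion_step(z, A, b=b)
# with positive coupling, all opinions converge near a shared positive value
```

Starting from the neutral state, the coupled update drives the group to a rapid collective agreement; this deadlock-breaking behavior is one of the properties that makes opinion-dynamics-based census attractive for collective decision-making.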
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163005</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reliable and Generalizable Real-World Planning with LLM-based Formalized Programming</title>
<link>https://hdl.handle.net/1721.1/163004</link>
<description>Reliable and Generalizable Real-World Planning with LLM-based Formalized Programming
Hao, Yilun
While large language models (LLMs) have recently demonstrated strong potential in solving planning problems, LLMs, as zero-shot planners themselves, are still not capable of directly generating valid plans for complex planning problems such as multi-constraint or long-horizon tasks. This motivates the need to develop a robust and reliable planning system for complex real-world planning problems. Furthermore, many frameworks aiming to solve complex planning problems often rely on task-specific preparatory efforts, such as task-specific in-context examples and pre-defined critics or verifiers, which limits their cross-task generalization capability. This motivates the need to extend robust and reliable planning systems with strong generalization capability. In this thesis, we first develop an LLM-based planning framework that formalizes and solves complex multi-constraint planning problems as constrained satisfiability problems and can reliably identify the unsatisfiable cores for unsatisfiable requirements, provide failure reasons, and offer personalized modification suggestions. Then, we generalize the paradigm by proposing a general-purpose framework that leverages LLMs to capture key information from planning problems and formally formulate and solve them as optimization problems from scratch, with no task-specific examples needed. Comprehensive experimental results have shown that our frameworks significantly outperform the baselines and have strong performance across tasks and LLMs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163004</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Quantification of the Phonon Drag Deformation Mechanism in Metals at Extreme Strain Rates</title>
<link>https://hdl.handle.net/1721.1/163003</link>
<description>Experimental Quantification of the Phonon Drag Deformation Mechanism in Metals at Extreme Strain Rates
Dowding, Ian
Extreme strain rate deformations, above 10⁶ s⁻¹, are seen across many fields of science and engineering, from meteorite impacts and impact-induced crystallographic phase changes to high-speed machining and additive manufacturing. Despite the range of applications, many common high-rate impact experiments are intrinsically limited to strain rates of only 10⁴ s⁻¹ before complicating the material deformation with a superimposed state of shock due to high impact pressures. However, recent advances in optically driven microballistics using laser-induced projectile impact tests have provided a new quantitative look into the extreme mechanics of materials, at rates above 10⁶ s⁻¹ and well below the onset of shock effects.&#13;
As deformation strain rates increase, additional strengthening mechanisms in metals become available, leading to a change in the underlying physics of dislocation motion and an increase in strength. This thesis first explores the mechanical properties of pure metals when deformed at extreme strain rates, both at ambient conditions and at elevated temperatures. Using an array of complementary characterization methods, two independent measurements of strength, the dynamic strength and the dynamic hardness, are assessed. As the temperature is increased from ambient, the strength and hardness of pure metals both increase appreciably. At these deformation rates, conventional thermal softening effects are in competition with anti-thermal hardening that arises from ballistic transport of dislocations governed by phonon interactions in the crystal lattice. These effects are quantified systematically, and it is shown that the anomalous thermal strengthening observed is, thermodynamically and kinetically, the expected form of plasticity under these impact conditions.&#13;
Next, the limits of where this anomalous thermal strengthening occurs in metals are investigated. First, solute elements are added to pure Ni to evaluate how additional dislocation pinning mechanisms affect the strength at ambient and elevated temperatures during extreme strain rate deformations. The strength increase due to solute pinning of dislocations is additive to the other strengthening mechanisms, yet thermally controlled, which produces a transition from ballistic transport of dislocations to thermally activated strengthening at a critical concentration of solutes. Finally, the upper bound of temperature for dislocation phonon drag strengthening is assessed. While pure metals were shown to increase in strength with increasing temperature, this “hotter-is-stronger” trend breaks down as the temperature approaches the melting point of the metal. Using Sn, chosen for its low melting temperature, the breakdown from “hotter-is-stronger” to “hotter-is-softer” as the initial substrate temperature approaches the melting temperature is systematically explored.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163003</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concentration-Dependent Thermodynamics and Kinetics in Lithium-Metal Battery Electrolytes: Implications for Coulombic Efficiency</title>
<link>https://hdl.handle.net/1721.1/163002</link>
<description>Concentration-Dependent Thermodynamics and Kinetics in Lithium-Metal Battery Electrolytes: Implications for Coulombic Efficiency
Plaza Rivera, Christian O.
Lithium (Li)-metal batteries (LMBs) present a promising avenue for high-energy applications. However, their practical adoption is constrained by challenges such as dendrite formation and unstable interphases. This study investigates the intricate interplay between electrolyte-dependent thermodynamics, kinetics, and transport properties in LMBs, focusing on the concentration effects in fluoroethylene carbonate (FEC) and 1,2-dimethoxyethane-based electrolytes containing lithium bis(fluorosulfonyl)imide. Due to FEC’s unique properties, these electrolytes facilitate significant upshifts in the Li redox potential and contribute to stable interphases and voltage profiles. Our findings reveal that the redox potential is primarily governed by the solvent’s electron-donating ability, reflecting underlying solvation dynamics, while the electrolyte permittivity influences reaction entropy trends. The results show entropy changes from increased molecular disorder at moderate concentrations to reduced entropy in highly concentrated regimes, driven by the formation of ion–solvent complexes. Kinetic analyses demonstrate a volcano-shaped dependence of exchange current density on concentration, centered at 2 M. Two prevailing perspectives propose that either kinetic–transport interplay or thermodynamic properties govern Coulombic efficiency (CE). However, separating these contributions is complex, since both higher exchange current density and upshifts in the Li redox potential enhance CE. Furthermore, CE strongly aligns with the combined effects of kinetics, thermodynamics, and transport, emphasizing the need for a holistic electrolyte design approach. Optimizing these three factors makes it possible to stabilize the interphase, promote uniform Li deposition, and elevate the overall safety and performance of next-generation LMBs.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163002</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>1500W High Voltage DC-DC Converter for Electroaerodynamic Aircraft Applications</title>
<link>https://hdl.handle.net/1721.1/163001</link>
<description>1500W High Voltage DC-DC Converter for Electroaerodynamic Aircraft Applications
Shevgaonkar, Mihir
Electroaerodynamic (EAD) propulsion is a novel form of propulsion that is nearly silent and has no moving parts. The first functional untethered heavier-than-air EAD aircraft had an endurance of 90 seconds and could only fly in a straight line. To enable a practical fixed-wing EAD aircraft that can fly outdoors with a payload for an extended period of time, improved power conversion technology is necessary. Prior work specifies a practical EAD aircraft as one with an endurance of 10 minutes, a payload capacity of 200 g, and full controllability. This work explores methods of increasing the specific power of power converters for EAD aircraft from 1.15 kilowatts per kilogram to over 2.0 kilowatts per kilogram. Such an increase can be achieved by utilizing magnetics integration and thermal management techniques, as well as adjustments in the operating point of the power converter. The power converter for the first-generation EAD aircraft had an input voltage of 200 V, an output voltage of 40 kV, an output power of 600 W, a specific power of 1.15 kilowatts per kilogram, and an efficiency of 85 percent. In this work, a power converter with an input voltage of 200 V, an output voltage of 20 kV, an output power of 1476 W, a specific power of 2.7 kilowatts per kilogram, and an efficiency of 96 percent was demonstrated to work for a 40 second duration. At the end of the test, device temperatures continued to increase, so it has not been proven that the converter can work in thermal steady state as required for a 10 minute flight. Future work would involve modifying the test setup to allow for adequate ventilation of the ambient air around the converter, as well as modifying the converter with adequate thermal management so as to enable operation under thermal steady state.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163001</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Feasibility Analysis and Fuel Burn Benefits of Relaxing Constraints in High Altitude Cruise</title>
<link>https://hdl.handle.net/1721.1/163000</link>
<description>Feasibility Analysis and Fuel Burn Benefits of Relaxing Constraints in High Altitude Cruise
Cezairli, Mina
Operational interventions, such as enabling more fuel-efficient trajectories, are desirable in mitigating the environmental impact of air travel due to their relatively fast implementation potential. In particular, the vertical inefficiency arising from the altitude stratification in the airspace can be mitigated by relaxing vertical constraints. The feasibility of vertical flexibility is evaluated by quantifying the rate of close encounters and the frequency of alerts that would be needed to prevent them. Substantial diurnal variability in the number of close encounters was found in the airspace, with lower rates of events during the nighttime period. Furthermore, regional differences among Air Route Traffic Control Centers were observed in the number of close encounters. The frequency of controller intervention events that would have to occur was evaluated at 25 NM and 50 NM alerting distance levels, and it was found that, given sufficient technological capabilities for alerting at the 25 NM reaction distance, most centers would have fewer than 10 alerts per hour during the nighttime period. Boston, Miami, and Seattle appeared especially promising, with approximately one alert per hour for each region. Finally, the potential fuel benefit from enabling vertically optimal trajectories was estimated to be up to 100,000 gallons of fuel savings per month in the case of a CONUS-wide nighttime implementation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163000</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk Management in Air Traffic Applications: Data-Driven Modeling, Prediction, and Generation of Realistic Weather Disruptions and Other Unfavorable Conditions</title>
<link>https://hdl.handle.net/1721.1/162999</link>
<description>Risk Management in Air Traffic Applications: Data-Driven Modeling, Prediction, and Generation of Realistic Weather Disruptions and Other Unfavorable Conditions
Zhang, Joseph
Understanding the interaction between weather and disruptions in a complex air transportation network is important to the design and evaluation of preemptive measures and responses taken by air traffic managers. However, data on disruptive weather events is often rather limited compared to the amount of data available for nominal operations. Additionally, in large-scale systems with many known and unknown confounding factors, it can be difficult to identify the relevance of existing data to different underlying distributions of interest. Furthermore, existing work generally follows a frequentist paradigm in predicting disruptions based on weather, and does not easily lend itself to inferring the causes of disruptions, which can be important both in building models and using them to make predictions, and in generating test cases to stress-test proposed design decisions. In this thesis, we develop a hierarchical Bayesian model for air traffic network operations and investigate methods for learning these models in data-constrained settings by extending existing work on retrospectively analyzing failures. We also include a guiding case study of LaGuardia Airport, in which a generative model is developed for the interaction between weather conditions and airport-level parameters within a single airport, trained on unlabeled historical data, and evaluated by simulating disruptions on historical schedules.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162999</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interposing the Syscall Boundary: Transparent Python Execution in SigmaOS</title>
<link>https://hdl.handle.net/1721.1/162998</link>
<description>Interposing the Syscall Boundary: Transparent Python Execution in SigmaOS
Wu, Ivy
σOS aims to provide both serverless and stateful support to cloud applications while maintaining strong isolation, security, and efficient startup times and scheduling among multiple users. While σOS and its container startup times have been successfully benchmarked for tasks written, compiled, and statically linked in Golang and Rust, it currently lacks support for other languages, including interpreted ones like Python. To bridge this gap, this paper presents the first integration of an interpreted language into σOS, enabling native Python support without compromising the system’s core principles. Our design, σPy, achieves this through three key ideas: (1) system call interposition via LD_PRELOAD to enable just-in-time dependency management, where Python libraries are fetched on-demand from tenant-specified AWS S3 buckets, avoiding overhead during container initialization; (2) a multi-layered mount namespace that spans the local machine, a per-realm Docker container, and a per-proc σcontainer, enabling efficient dependency caching at the per-tenant granularity; and (3) a hybrid C++, C, and Python API layer that bridges σOS’s Protobuf-based RPC system with Python’s dynamic types. Preliminary benchmarks demonstrate that σPy achieves performance comparable to that of compiled languages like Golang when interacting with the σOS API, with only 0.2 - 0.3 additional milliseconds of overhead on all tested API calls, validating the success of Python programs on the σOS architecture.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162998</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating LLM Runtime Latency</title>
<link>https://hdl.handle.net/1721.1/162997</link>
<description>Simulating LLM Runtime Latency
Wang, Sarah Y.
Large Language Models (LLMs) are expensive to run and can incur high latencies. Each LLM application has its own cost and latency targets. For example, AI voice assistants operate under low latency objectives, while large document batch processing jobs are typically cost-sensitive. However, navigating these trade-offs is not trivial, as LLM latency is highly task-specific and depends on factors such as the offered query load, the hardware configurations, request properties, and various model characteristics. To support the user in configuring their deployment according to their application needs, we introduce vLLMSim, an accurate simulator that estimates the latency of a given workload on different hardware configurations. vLLMSim advances two key avenues toward latency-aligned LLM deployments. First, the simulated latency metrics inform the user’s model and hardware choice, so they can use a configuration that is ideal for their workload. Second, our simulator enables researchers to quickly test latency-improving ideas, bypassing the need for time-consuming implementations before validating their effectiveness. In fact, vLLMSim is already used in two research projects with the goal of reducing latency and cost of LLM inference. In this thesis, we show how vLLMSim’s design allows it to accurately support the use cases above, while providing highly accurate runtime predictions. To support hardware exploration without GPU access, vLLMSim provides precomputed performance profiles that are sufficient to accurately simulate the user’s workload. The simulator code can be found here, and the instrumented vLLM code for creating profiles can be found here.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162997</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for Latent Space Interpretation via In-the-loop Fine-Tuning</title>
<link>https://hdl.handle.net/1721.1/162996</link>
<description>Methods for Latent Space Interpretation via In-the-loop Fine-Tuning
Wen, Collin
With language models increasing exponentially in scale, being able to interpret and justify model outputs is an area of increasing interest. Although enhancing the performance of these models in chat mediums has been the focus of interaction with AI, visualization of the model latent space offers a novel modality for interpreting information. Embedding models have traditionally served as a means of retrieving information relevant to a topic by converting text into a high-dimensional vector. The high-dimensional vector spaces created via embedding offer a way to encode information that captures similarities and differences in ideas, and visualizing these nuances in terms of meaningful dimensions can offer novel insights into the specific qualities that make two items similar. Leveraging fine-tuning mechanisms, dimension reduction algorithms, and Sparse Autoencoders (SAEs), this work surveys state-of-the-art techniques for visualizing the latent space in highly interpretable dimensions. ConceptAxes, a framework derived from these techniques, is provided to produce axes that capture high-level ideas ingrained in embedding models. ConceptAxes with highly interpretable dimensions allows for better justification of the latent space and its clusters. This method of increasing embedding transparency proves valuable in various domains: (1) AI-enhanced creative exploration can be more guided and customized for a particular experience, and (2) high-level insights can be made more intuitive with vast text datasets.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162996</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Commanding, Telemetry, and Software Strategy for&#13;
CubeSat Laser Infrared CrosslinK (CLICK) Mission</title>
<link>https://hdl.handle.net/1721.1/162995</link>
<description>Commanding, Telemetry, and Software Strategy for&#13;
CubeSat Laser Infrared CrosslinK (CLICK) Mission
Whitmore, Garrett
This work outlines the software-related requirements necessary for successful operations of the NASA-sponsored Cubesat Laser Infrared CrosslinK (CLICK) B/C mission [1] [2]. This twin-cubesat mission will demonstrate peer-to-peer laser-communication capabilities novel at this small terminal scale. Optical laser communication terminals can have lower Size, Weight, and Power (SWaP) compared with traditional radio communication, as well as fewer licensing regulations and improved link security. CLICK-B/C follows from CLICK-A, a risk-reduction mission that successfully performed laser downlink with a ground station at MIT [3]. In addition to downlink, B/C will perform crosslink experiments at a data transmission rate over 20 Mbps at ranges between 20 and 580 km in Low-Earth Orbit (LEO). This thesis focuses on the software related to the function of the satellite payload, in particular, the improvements and additions made to the operating system, software systems that were ported over from CLICK-A, the integration and testing of these subsystems, and analyses done to prepare for in-flight operations before launch. An overview of the MIT &amp; UF payload hardware and electronics is given before detailing interactions with components as necessary. A deep dive into the payload software libraries, internal and external communication channels, and operating system build details is given. A description of functional testing and its results is laid out, as well as a template crosslink experiment script and further specifications for mission-related analyses and pre-launch preparations. This work on software upgrades, verification, and examination is necessary for CLICK-B/C to reach its stated mission goals, here on Earth and in its orbit.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162995</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundational Verification of Running-Time Bounds for&#13;
Interactive Programs</title>
<link>https://hdl.handle.net/1721.1/162994</link>
<description>Foundational Verification of Running-Time Bounds for&#13;
Interactive Programs
Tockman, Andrew
The field of formal methods has a rich history of practical application in verification of the correctness of software. Existing verification tooling spans a wide range of rigor, from proving relatively weak properties via traditional static analysis to powerful theorem provers that can express very precise specifications. It is sometimes desirable to prove properties about programs that reference not just semantic behavior but also other metaproperties of the program’s execution, such as runtime or I/O histories. There is also a wide variety of existing tooling for proving bounds on program runtime. However, there is no prior work on a maximally rigorous verification system that can prove predicates involving all of semantic behavior, runtime, and I/O. Our contribution is exactly that: we extend the existing Bedrock2 framework, which implements a C-like systems language within a powerful proof engine together with a verified compiler capable of expressing arbitrary proof conditions involving behavior and I/O, and augment it to add the capacity to reason about runtime as well. As a capstone proof of concept, we apply the new metrics machinery to an IoT lightbulb controller (already verified with respect to the previous framework) and produce a new specification with time bounds based on arrival of network packets.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162994</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graph Neural Networks for City Policy Recommendations as a Link Prediction Task</title>
<link>https://hdl.handle.net/1721.1/162992</link>
<description>Graph Neural Networks for City Policy Recommendations as a Link Prediction Task
Rozario, Consecrata Maria
Graph Neural Networks (GNNs) have become a widely utilized tool in recommender systems in various contexts. While recommendation tasks can be approached using a multitude of data structures and types, graph-structured data is particularly well-suited for this domain, as graphs naturally capture a variety of relationships and interactions between entities. By leveraging graph representation learning, we can effectively encode these complex dependencies, enabling robust and context-aware recommendations. We use this methodology in the domain of policy recommendations for urban centers. To recommend policies, we learn the complex local and global relationships between cities, their environmental features, and currently implemented policies. We construct a graph structure relating cities, implemented policies, and city features, and formulate the policy recommendation task as a GNN link prediction problem, demonstrating its potential to scale data-driven urban governance.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162992</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Detection of Landmark Acoustic Cues in Human Speech</title>
<link>https://hdl.handle.net/1721.1/162991</link>
<description>Automatic Detection of Landmark Acoustic Cues in Human Speech
Park, Janette H.
This study presents a framework for the automatic detection of the eight landmark acoustic cues in human speech. Landmarks are key articulatory events, produced as a result of minimal vocal tract constriction (e.g., vowels and glides) or closures and releases in the oral region (e.g., nasal, fricative, and stop consonants). A complete landmark detection system is a key step towards an overarching speech analysis system that relies on lexical acoustic cues, as landmarks guide the identification of other acoustic cues in speech. In the proposed framework, the acoustic properties of each of the eight landmark cues are modeled by extracting speech-related measurements and training Gaussian Mixture Models (GMMs). To remove the effects of speaker variability and different recording environments, methods for normalizing speech-related measurements are proposed and evaluated. For a new speech signal, the normalized speech-related measurements are extracted at each time frame and evaluated against the eight trained GMMs to compute the likelihood of each landmark. Using Bayes’ Theorem, the posterior probabilities are calculated to determine the most probable landmark (or absence thereof) at each time frame. The system’s performance is evaluated by comparing the detected landmarks to the manually labeled ground truth landmark annotations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162991</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-Cell Language Model for Transcriptomics &amp; Cell Type Annotation</title>
<link>https://hdl.handle.net/1721.1/162990</link>
<description>Single-Cell Language Model for Transcriptomics &amp; Cell Type Annotation
Lin, Vincent
As single-cell transcriptomics datasets continue to grow in size and biological complexity, current models for cell type annotation remain limited in their generalizability and are often evaluated on only a small fraction of the standardized cell types defined in modern ontologies. Current state-of-the-art models for transcriptomic representation demonstrate that deep learning models can extract rich features on single-cell data but are evaluated on very few cell types and perform poorly on broader datasets. This work introduces a multimodal model architecture that integrates large language models (LLMs) with gene expression encoders to address this scalability gap in cell type annotation. Inspired by vision-language frameworks, our architecture combines a pretrained scRNA encoder with a Perceiver Resampler that maps gene expression profiles into the latent space of a large language model. We construct structured, ontology-grounded datasets of up to 197 cell types and evaluate our model's performance using instruction fine-tuning. Our experiments analyze the impact of integrating language modeling components with scRNA encoders and their benefit on cell type annotation performance for large, diverse datasets. Our results show that while a scRNA encoder may be sufficient for small datasets, our single-cell model leveraging LLMs consistently outperforms the scRNA encoder baseline on larger datasets, with a widening gap in classification performance as data complexity increases, demonstrating the scalability and improved generalizability of our multimodal architecture. We also provide further analysis of the tradeoffs associated with using the natural language domain for biological analysis.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162990</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference Time Search for Protein Structure Prediction</title>
<link>https://hdl.handle.net/1721.1/162989</link>
<description>Inference Time Search for Protein Structure Prediction
Qi, Richard
Scaling inference-time compute for deep learning models has led to superhuman performance in games and enhanced reasoning capabilities for language models. However, similar gains have not yet been made in the field of biomolecular structure prediction. We introduce a new paradigm for inference-time search by adding architectural components and a finetuning procedure to state-of-the-art structure prediction models that give rise to a discrete latent space. We implement algorithms for searching and sampling in this discrete latent space and conduct experiments on a small model, demonstrating an increase in oracle and top-1-selected accuracy for predicted protein-protein complex structures.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162989</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing a Localized Lower Body Negative Pressure Garment for Long-Duration Spaceflight</title>
<link>https://hdl.handle.net/1721.1/162988</link>
<description>Designing a Localized Lower Body Negative Pressure Garment for Long-Duration Spaceflight
Chu, Kaitlyn A.
Lower Body Negative Pressure (LBNP) has long been explored as a countermeasure to the physiological deconditioning and orthostatic intolerance associated with prolonged microgravity exposure. Traditional LBNP systems, however, are large, stationary devices that require astronauts to remain immobile during use, limiting their integration into daily spaceflight routines. Although more mobile LBNP solutions have emerged, they remain cumbersome and uncomfortable, ultimately still restricting multitasking and reducing operational feasibility. This study introduces the Soft Kinetics INterface (S.K.I.N.), a flexible, wearable structure designed to support the application of localized LBNP. The goal was to evaluate whether targeted negative pressure applied through the S.K.I.N. could replicate the fluid shift effects of a traditional LBNP chamber while improving comfort, mobility, and time-efficiency. The human thigh was chosen as the focus of this technology demonstration due to its known responsiveness to LBNP and its suitability for small-scale implementation. The development of the S.K.I.N. began with finite element modeling (FEM) to identify optimal material properties and structural geometry. Iterative physical prototyping resulted in a sinusoidal silicone waveform design, selected for its mechanical stability and user comfort. The final prototype was then evaluated in three experimental phases: (1) mechanical testing using pressure-sensitive film to assess structural integrity under vacuum, (2) an ex-vivo pig leg study to validate experimental protocols and assess the S.K.I.N.’s ability to induce fluid shifts, and (3) a human study (n=10) comparing fluid shifts between the S.K.I.N. and a scaled-down version of the traditional LBNP chamber. On average, results from the human study showed that the S.K.I.N. successfully induced localized fluid shifts similar to those of the chamber. However, response magnitude varied considerably across participants. 
Most of the observed effect was driven by female participants, who exhibited more pronounced fluid shifts, while most male participants showed minimal or no measurable response. FEM simulations supported this finding, suggesting that higher fat-to-muscle ratios — more common in women — may enhance tissue deformability and volume displacement, thereby facilitating greater fluid shifts under negative pressure. Although these differences limit generalizability, they also highlight the potential for the S.K.I.N. to serve as a more targeted countermeasure for specific physiologies or user groups. Although the current S.K.I.N. design’s limited surface area constrains its overall effect, the concept shows promise. The ability to deliver targeted fluid shifts in a more mobile, comfortable format could enable integration into dynamic operational settings. Future work should focus on expanding the system to cover larger areas, such as a whole-pants version, and incorporating a portable vacuum source for mobility in both spaceflight and terrestrial applications. Larger, more diverse participant cohorts will also be necessary to assess long-term usability, efficacy, and individual variability in response.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162988</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mantis: A Screen Magnification Tool for Diagram Traversal</title>
<link>https://hdl.handle.net/1721.1/162987</link>
<description>Mantis: A Screen Magnification Tool for Diagram Traversal
Patterson, Lydia J.
Complex diagrams and charts can be difficult for people who use screen magnification to navigate. A sense of spatial context and of the diagram’s overall structure is oftentimes lost, as magnifiers can only magnify a fraction of the screen at any given time. So, while sighted users have both clarity and full context simultaneously, screen magnifier users often have to choose or split their attention between the two. Existing screen magnifiers are content-agnostic, so the current way of navigating visualizations is freeform and unguided. The burden of figuring out where to explore while retaining a mental model of the diagram is placed entirely on the user. In this paper, we present Mantis—six prototypes of an automatic, content-aware screen magnification tool designed to aid people who have low vision in the traversal of diagrams. Each design experiments with what sorts of information might be provided to help the user retain a sense of context. Further, they each explore how such a tool might use its knowledge of the diagram’s semantic structure to streamline traversal to and from areas of interest to the user. To this end, we evaluate how these proofs of concept improve the user’s navigational experience and reduce the user’s cognitive load.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162987</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Teacher-Centered Design in Educational Games: Iterative Improvements to the Tragedy of the Commons pSim Dashboard</title>
<link>https://hdl.handle.net/1721.1/162986</link>
<description>Teacher-Centered Design in Educational Games: Iterative Improvements to the Tragedy of the Commons pSim Dashboard
Luong, Jacky K.
Teaching tools such as the Tragedy of the Commons (ToC) participatory simulation, developed by MIT STEP Lab, have the potential to develop different skills or knowledge compared to single-player educational games. ToC illustrates the challenges of managing shared resources, but its existing teacher dashboard may not be well-suited to support its growing use across various classrooms. Through surveying and interviewing educators along with observing classroom usage, the software's shortcomings and opportunities for improvement were identified. This resulted in the design and implementation of a redesigned teacher dashboard, including a new “central bank” feature that provides structure to support more complex simulations. Additional enhancements improved usability and performance. Evaluations with teachers and controlled playtests demonstrated that these changes show promise in enabling richer classroom dynamics and making facilitation easier. The findings underscore the importance of teacher experience in educational game design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162986</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>All Therapies Are Equal - Unless You’re a Bot: Evaluating the Effectiveness of Four Therapy Schools for AI Chatbot Therapists</title>
<link>https://hdl.handle.net/1721.1/162985</link>
<description>All Therapies Are Equal - Unless You’re a Bot: Evaluating the Effectiveness of Four Therapy Schools for AI Chatbot Therapists
Liu, Andi
This thesis tests two design questions for Large Language Model (LLM) Chatbot Therapists: Which therapeutic school suits an LLM best, and does an explicit Theory-of-Mind (ToM) reflection improve outcomes? We prompted GPT-4.1-mini to act as eight therapists — CBT, Narrative, Psychodynamic, and SFBT, each with and without a ToM step — and held 240 simulated sessions with scripted AI patients. SFBT achieved the greatest projected PHQ-9 improvement (around 4 points), significantly higher than CBT, Narrative, or Psychodynamic approaches. Immediate distress (SUDS) fell modestly and uniformly across schools. ToM reasoning did not alter either measure. The findings show that extra “thinking time” might not automatically translate into therapeutic gain, but also highlight a current strength of LLMs: executing brief, rule-based therapies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162985</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Fiber Coupling with Actuated Mirrors</title>
<link>https://hdl.handle.net/1721.1/162984</link>
<description>Automated Fiber Coupling with Actuated Mirrors
Vel, Vetri Senthil
Almost all atomic physics experiments rely on precise alignment of lasers. For example, optical fields are used to cool, control, and image atoms in neutral atom arrays. In this thesis, we present a design for mirrors actuated by servos that allow the precise, repeatable alignment of lasers in free space optical setups. We then apply these actuated mirrors to automate fiber coupling, where laser beams are coupled from free space into a fiber waveguide. We present the theory of fiber coupling and use experimental data on the fiber coupling landscape to develop an accurate digital twin. Insights from the combination of the digital twin and experimental data are used to develop a fast and effective algorithm for automated fiber coupling.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162984</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>ACED: Automatic Concourse Event Detection</title>
<link>https://hdl.handle.net/1721.1/162983</link>
<description>ACED: Automatic Concourse Event Detection
Wagner, Luke A.
Fans of the San Antonio Spurs often face long delays when traversing the arena or waiting for food. Automatic Concourse Event Detection (ACED) is a novel system designed for tracking these statistics in the Spurs’ arena in real time. We use existing machine learning models and introduce novel processing algorithms to identify the total number of people in each section throughout the arena in addition to tracking the wait times for different restaurants and restrooms. ACED collects and stores this information in a database, which could be used to present fans with up-to-date arena information in a live dashboard to assist them in their in-game decision making. This would improve the overall fan experience, which could encourage fans to buy tickets more frequently. We provide the San Antonio Spurs with a completed implementation of ACED, which is ready to be deployed within the Spurs’ arena.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162983</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultraviolet-C Powered Air Purifying Respirator (UVC PAPR)</title>
<link>https://hdl.handle.net/1721.1/162982</link>
<description>Ultraviolet-C Powered Air Purifying Respirator (UVC PAPR)
Seeyave, Evan
The global challenge posed by pandemics, notably COVID-19, has underscored the critical need for advanced personal protective equipment (PPE). This thesis details the development and evaluation of a multi-stage powered air-purifying respirator (PAPR) incorporating direct ultraviolet-C (UVC) germicidal irradiation. The proposed PAPR aims to provide enhanced protection by actively sterilizing air through a UVC chamber immediately prior to inhalation. This approach offers an advantage over traditional filter-based PAPRs by eliminating both the need to replace filters and the need for high-power motors to pull air through them, while still neutralizing a broad spectrum of airborne pathogens, including viruses and bacteria. The primary objective of this research is to design, construct, and test a PAPR prototype capable of achieving a high inactivation rate (target 99.9%), thereby offering a robust solution for individuals in high-exposure environments. In addition to the UVC chamber, we also built an alternate ultraviolet-A (UVA) activated titanium dioxide (TiO2) photocatalytic oxidation (PCO) chamber. This work encompasses the overall design of the system, safety considerations, and testing to quantify its pathogen inactivation efficacy and to characterize system performance.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162982</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>GridFix: A Desktop Application for the Correction of Algorithmically-Generated Beatgrids for Music</title>
<link>https://hdl.handle.net/1721.1/162981</link>
<description>GridFix: A Desktop Application for the Correction of Algorithmically-Generated Beatgrids for Music
Shi, Iris
Beatgridding is a technique meant to aid DJs in aligning the beats of two different songs. By overlaying a grid of beat markers (a “beatgrid”) on top of a waveform representation of the track being beatgridded, a song’s beats can be visualized and thus easily matched to another’s. State-of-the-art DJ software—like rekordbox by the company AlphaTheta—will algorithmically generate beatgrids for songs. However, these beatgrids are not always accurate and can often be difficult to correct with only the software-provided tools. GridFix is a desktop application designed to be an auxiliary tool for rekordbox, allowing users to correct rekordbox-generated beatgrids by providing additional functionality that rekordbox does not. GridFix’s main advantage is its ability to let users make local changes to small, isolated sections of a beatgrid, a task that is quite hard to achieve in rekordbox. GridFix is fully compatible with rekordbox and fairly easy to learn how to use, as shown by user testing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162981</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Graph Metrics for Improving Cybersecurity on Software Dependency Networks</title>
<link>https://hdl.handle.net/1721.1/162980</link>
<description>Graph Metrics for Improving Cybersecurity on Software Dependency Networks
Yao, Darren Z.
Modern software ecosystems are deeply interconnected, allowing a vulnerability in a single component to propagate and affect many others. In this thesis, we model software ecosystems as directed graphs, and apply various graph-theoretic metrics to quantify security risk. We compare two deep learning frameworks (PyTorch and TensorFlow) with two traditional software frameworks (npm and PyPI), identifying critical properties of their dependency structures, which motivates several recommendations for improving software supply chain security.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162980</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning Robotic Cutting Operations</title>
<link>https://hdl.handle.net/1721.1/162979</link>
<description>Planning Robotic Cutting Operations
Lunawat, Tarang
Classical planning and most PDDL variants operate on the assumption that the number and types of objects present in the environment are known at the time of initialization and neither can nor do change during plan execution. However, there are many domains in which it is helpful and necessary to be able to capture action (or environment) effects that are able to change the existence of objects rather than just facts about these objects. PDDLStream already provides a framework for "certifying" new facts about the environment as necessary throughout plan execution; I propose using PDDLStream to construct a principled way to reason over not just added facts, but also added or removed objects in the environment. In order to do this, I will work within the domain of cutting operations in the kitchen, as this is a domain that both necessitates a lot of object change as objects are cut and often requires chains of these generated objects to be fully reasoned over. Additionally, I will lay the groundwork to use this principled way to reason over new objects to implement different types of cutting operations in the kitchen, with the eventual goal of a robot planner being able to sequence different provided actions to more efficiently work with knives in the kitchen in a human-like manner.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162979</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Wavefront Estimation Algorithms for High-Contrast Imaging of Exoplanets</title>
<link>https://hdl.handle.net/1721.1/162978</link>
<description>Adaptive Wavefront Estimation Algorithms for High-Contrast Imaging of Exoplanets
Manojkumar, Saikrishna
The direct imaging of exoplanets orbiting stars outside our solar system remains one of the crucial tools we have available to answer whether there exists life beyond Earth. The light from an Earth-like exoplanet is approximately ten orders of magnitude dimmer than its host star and hence the imaging system of the telescope observing the exoplanet must be able to suppress the starlight to achieve a “contrast” of 10⁻¹⁰ in the image. This is typically achieved using a coronagraph, which blocks the light from the star while allowing the light from the planet to pass through. However, some starlight that leaks through the coronagraph needs to be further removed in the search region for the exoplanet; this region is referred to as the dark hole or dark zone (DZ). Creating a DZ requires the use of focal plane wavefront sensing and control techniques, which estimate the electric field of the starlight in the focal plane of the telescope using a camera and then inform the deformable mirrors (DMs) located upstream of the coronagraph to null these electric fields. Once the DZ is created with a desired contrast, there are still slow, high-order drifts in the optical system that cause the contrast to degrade over the long observation times of the science target. High-order wavefront sensing and control (HOWFSC) techniques are required to maintain the contrast in the DZ while observing a science target. Dark Zone Maintenance (DZM) is a technique that has demonstrated the ability to maintain the contrast in the DZ over long observation times. This algorithm utilizes an Extended Kalman Filter (EKF) to estimate the open-loop electric field at every pixel in the DZ and uses this information to inform the control algorithm.
The achievable contrast and contrast stability of DZM are determined by several key parameters: the optical system’s drift rate, the photon flux and associated shot noise in the measurement images, and the probe magnitude applied to the DMs for the estimation algorithm. This work quantifies the impact of the drift rate, photon rate, and probe magnitude on the performance of DZM by performing a parameter scan on high-contrast imaging testbeds. The parameter scan was performed on both the in-air High-contrast imager for Complex Aperture Telescopes (HiCAT) testbed at the Space Telescope Science Institute (STScI) and the in-vacuum Decadal Survey Testbed (DST) at the Jet Propulsion Laboratory (JPL). The parameter scan was run in both simulation and on the physical testbed using the contrast in the DZ as a performance metric, and evaluated relative to the photon-noise theoretical bounds to assess the efficacy of the DZM algorithm. The substantial difference between the theoretical bounds and experimental results, on average 70 times worse on HiCAT, motivated the development and implementation of a new DZM algorithm that utilized a separate EKF to estimate the modes of wavefront error derived from the DMs and uses that information to correct for the aberrations. This new modal EKF algorithm was tested with a similar parameter scan on the HiCAT simulator, demonstrating a nearly fivefold improvement over the original DZM algorithm’s simulated performance. The results of this work will inform the design of future algorithms to maintain high contrast during observations for upcoming space telescope missions such as the Habitable Worlds Observatory (HWO).
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162978</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Incentivizing Data Contributions in Decentralized Collaborative Learning</title>
<link>https://hdl.handle.net/1721.1/162977</link>
<description>Incentivizing Data Contributions in Decentralized Collaborative Learning
Wang, Yuxiao
In a collaborative learning scheme such as the federated learning model, each user benefits from the data contribution of others. Previous work shows that the federated learning protocol can incentivize users to contribute more than in the competitive equilibrium by penalizing deviations. However, a central controller with access to all the data may raise privacy concerns. In this work, we construct a decentralized collaborative protocol in which users share data without relying on a centralized controller. We then extend this protocol to a repeated game and analyze the competitive equilibrium behavior, along with strategies users can implement to foster collaboration in the repeated setting of the protocol. We provide a quantitative analysis of free-rider behavior under decentralized protocols and compare the amount of information collected with decentralized protocols against that in the centralized protocol.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162977</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>DisViz: Visualizing real-world distributed system logs with space time diagrams</title>
<link>https://hdl.handle.net/1721.1/162976</link>
<description>DisViz: Visualizing real-world distributed system logs with space time diagrams
McMenamy, Josiah
This thesis aims to provide an intuitive debugging and learning tool for distributed systems that communicate by message passing. Understanding and debugging distributed systems can be challenging and slow to iterate on, so there is a need for tools that can speed up the time it takes to diagnose the root cause of a bug. There exists significant prior work in creating tools that can aid in the visualization and debugging of distributed system executions, such as the ShiViz log visualizer [13]. This work builds on top of these tools to provide more debugging information, handle large log files, and be easily instrumented in existing systems. We demonstrate using the tool to debug issues in an implementation of the Raft consensus algorithm [34].
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162976</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Casting Protein Structure Predictors as Energy-Based Models for Binder Design and Scoring</title>
<link>https://hdl.handle.net/1721.1/162975</link>
<description>Casting Protein Structure Predictors as Energy-Based Models for Binder Design and Scoring
Nori, Divya
Protein binder design has been transformed by hallucination-based methods that optimize structure prediction confidence metrics, such as the interface predicted TM-score (ipTM), via backpropagation. However, these metrics are imperfect proxies for binding affinity and do not reflect the statistical likelihood of a binder–target complex under the learned distribution. In this work, we propose a principled alternative: an energy-based framework that directly extracts the statistical likelihood of a predicted binder–target complex from a structure predictor’s internal confidence distributions. Building on the Joint Energy-based Modeling (JEM) framework, we introduce pTMEnergy, a statistical energy function over structures that is derived from predicted inter-residue error distributions. We incorporate pTMEnergy into BindEnergyCraft (BECraft), a hallucination-based binder design pipeline that maintains the same optimization framework as BindCraft but replaces ipTM with our energy-based objective. Across a diverse panel of challenging protein targets, BECraft achieves higher in silico success rates compared to BindCraft, RFDiffusion, and ESM3. Beyond design, we evaluate pTMEnergy as an unsupervised scoring function for retrospective virtual screening tasks. Without any task-specific supervision or retraining, pTMEnergy consistently outperforms baseline methods across both protein–protein and protein–RNA interaction benchmarks. Our results demonstrate that confidence-derived energy functions offer a powerful and generalizable signal for binder design and scoring.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162975</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Modeling, Optimization, and LLM-Assisted Decision Support for Geothermal Well Arrays</title>
<link>https://hdl.handle.net/1721.1/162974</link>
<description>Efficient Modeling, Optimization, and LLM-Assisted Decision Support for Geothermal Well Arrays
Ouko, Edwin O.
Geothermal well arrays, which organize multiple geothermal wells into carefully planned geometric configurations, provide an opportunity to enhance energy production capacity and increase fault tolerance of geothermal systems. Closed-loop geothermal systems (CLGS), a type of geothermal well design, promise to allow harnessing of geothermal energy in any location with minimal adverse environmental impact. I demonstrate how the development of these emerging geothermal technologies could be accelerated by recent advances in large language models (LLMs) in conjunction with high-level high-performance programming languages like Julia. In particular, I focus on how LLMs could be used in design brainstorming and to increase efficiency in numerical modeling. I assess the potential of state-of-the-art LLMs such as ChatGPT, Gemini, Claude, Grok, and a domain-specific model, AskGDR, as expert assistants in geothermal research. Owing to the unpredictable reliability of LLMs, there is a constant need for objective evaluation benchmarks in various domains. I propose a novel approach, leveraging Google’s recently introduced AI tool, NotebookLM, to accelerate the generation of quantitative geothermal benchmarks with only new unpublished questions. In addition, I propose the use of blackbox optimization as a computationally less costly alternative to approximate the optimal configuration of CLGS wells in a geothermal array to minimize thermal interference and improve heat energy production. I evaluate several optimization strategies such as Bayesian optimization, particle swarm optimization, natural evolution strategies, differential evolution optimization, Nelder-Mead, and simulated annealing on various performance characteristics such as convergence speed and highest production capacity attained.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162974</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Transformer-Based Foundation Model for Human Microbiome Analysis</title>
<link>https://hdl.handle.net/1721.1/162973</link>
<description>A Transformer-Based Foundation Model for Human Microbiome Analysis
Medearis, Nicholas A.
The human microbiome plays a crucial role in maintaining our health. Alterations in the microbiome have been linked to various chronic conditions like autoimmune disorders, metabolic diseases, and cancer. While various tools have been developed to study the microbiome, each tool tends to be specialized for a specific task. To overcome this limitation, we report on the development of a foundation model pretrained on 13,524 human microbiome metagenomic samples. The model was then fine-tuned to predict the clinical status of the host. Our model was able to differentiate between healthy and diseased samples in 10-fold cross-validation on the training dataset with an accuracy of 83.7%. On an external validation dataset of 927 samples, our model had an accuracy of 74.9%. Notably, our model performed even better at differentiating diseases from one another. On the diseased samples in the training dataset, it classified samples with an accuracy of 93.3% in 10-fold cross-validation. Together, our results show that generative AI has the potential to transform microbiome research and advance personalized medicine.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162973</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards an Augmented Reality-based Cyber-Physical Production System Planner</title>
<link>https://hdl.handle.net/1721.1/162972</link>
<description>Towards an Augmented Reality-based Cyber-Physical Production System Planner
Mueller, David
Investment in automation by small and medium-sized enterprise (SME) manufacturers in the United States has lagged behind their larger counterparts for decades, despite comprising a majority of the nation’s manufacturing industry. The cyber-physical production systems (CPPSs) introduced by Industry 4.0 promise to bolster productivity and efficiency, but only for those enterprises that invest in constituent technologies. These technologies are not easily integrated in existing factories, typically requiring installation of invasive infrastructure and continuous technical support. Robotic integration is typically performed by specialized third-party firms or by in-house staff with extensive technical training, such as engineers. SME manufacturers are particularly sensitive to the complexities of robot integration due to limited access to technologists, and their need for frequent reconfiguration under economies of scope. This thesis introduces Marve: the Mobile Augmented Reality Visual Editor. Marve is a proof-of-concept Android application that enables line workers to directly configure and control an autonomous mobile robot (AMR)-backed hybrid intralogistics system using low-cost consumer hardware. Workers can use Marve’s augmented reality (AR)-based interface to define and visualize the essential geometry and components of such a system. Once configured, workers are able to simulate how the system would respond to their requests to move material throughout the factory. The use of AR enables extensive work to be done at the planning stage of CPPS integration by line workers themselves, bypassing the need for modeling by engineers. Marve relies exclusively on fiducials and visual-inertial odometry (VIO) for localization, and fiducial tags for object tracking, thus eliminating the need for supporting infrastructure. Taken together, these features make Marve an easy on-ramp for SMEs seeking to transition legacy production lines into the CPPSs of Industry 4.0.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162972</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Efficient ML Inference in SigmaOS with Model-Aware Scheduling</title>
<link>https://hdl.handle.net/1721.1/162971</link>
<description>Enabling Efficient ML Inference in SigmaOS with Model-Aware Scheduling
Liu, Katie
Machine learning inference in multi-tenant cloud environments poses significant challenges for minimizing latency and resource contention, especially as models grow in size and complexity. This thesis addresses the cold start overhead and scheduling inefficiencies of multi-tenant ML serving by integrating the RayServe distributed model-serving framework into σOS, a cloud operating system that unifies container and serverless paradigms. The thesis also proposes two model-aware schedulers within σOS that intelligently route inference requests to reduce the number of cold starts: Model Colocation, which prioritizes placing requests on machines where the required model is already loaded, and Centralized Model Registry, which tracks globally available models to inform scheduling decisions. These policies proactively reduce model load times by reusing cached models. Experimental results on language translation workloads in an 8-node cluster show that these schedulers achieve a ≈ 50% reduction in average inference latency and eliminate roughly 4–5 cold starts per workload, compared to σOS’s default scheduler. Through this model-aware approach to scheduling, our work enables more efficient, scalable, and low-latency ML inference serving in multi-tenant cloud settings.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162971</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of efficiency-driven aircraft technology improvements on climate and air quality</title>
<link>https://hdl.handle.net/1721.1/162970</link>
<description>Impact of efficiency-driven aircraft technology improvements on climate and air quality
Shukla, Aditeya
The impacts of commercial aviation on global climate and air quality have led to an industry-wide movement to reduce its environmental impact. While technological developments in aircraft propulsion, materials, and aerodynamics aim to reduce fuel consumption and CO₂ emissions, these efforts often overlook the full climate and air quality impacts of aviation, especially emissions impacts of NOₓ, CO, HC, soot, and contrails. This study assesses the environmental constraints associated with advancements driven by fuel efficiency by modeling aircraft technologies across narrow-body, wide-body, and regional jet categories. By focusing on near-future technology insertions in materials, aerodynamics, and propulsion, we can compute quantifiable environmental metrics such as temperature changes, global warming potentials, and monetized environmental damages. Our modeling shows that certain propulsion technologies — such as increased component polytropic efficiencies or higher allowable turbine-metal temperatures — can reduce fuel consumption by more than 10% under favorable re-optimizations of engine design. However, they often raise engine core pressures or temperatures in ways that increase NOₓ emissions indices by more than 30%. This can lead to worse air quality damages, offsetting some of the CO₂ savings, and in some cases result in a 2% increase in environmental damages on a total net present value (NPV) basis. Primary structure material upgrades consistently reduce both fuel burn and NOₓ emissions. These improvements in air quality from reduced NOₓ result in a 10% reduction of the total NPV from environmental impacts. This analysis shows that fuel efficiency alone is an incomplete metric for understanding the environmental impact of an aircraft.
By offering a quantitative assessment of how near-future upgrades can affect both climate and air quality, this study also provides guidance on which technology paths are most effective in reducing the overall environmental impact of aviation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162970</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Digital Symbol Digit Test: Multimodal Behavior Detection and Visualization</title>
<link>https://hdl.handle.net/1721.1/162968</link>
<description>Digital Symbol Digit Test: Multimodal Behavior Detection and Visualization
Xu, Jessica J.
Neurodegenerative diseases, such as Alzheimer’s, impact many people worldwide and currently have no cure, making early detection essential for effective symptom management and intervention. Traditional diagnostic practices often rely on subjective clinical evaluations that can vary between practitioners, highlighting the need for more objective methods. The digital Symbol Digit Test (dSDT), administered via the Cognitive Health App on an iPad and using the ETVision Eye Tracking System, aims to provide an automated, reliable method to analyze patient cognitive function and detect early signs of impairment by capturing handwriting and gaze data. This thesis builds upon previous work by automating the synchronization of these two data modalities, refining definitions of learning behaviors, and developing pipelines for data processing and visualization. By creating a synchronized multimodal dataset, we can visualize participant behavior for more intuitive interpretation and draw meaningful conclusions. These contributions provide an end-to-end framework for analyzing behavior during the cognitive assessment and lay the groundwork for future development of diagnostic models to detect early signs of neurodegenerative diseases.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162968</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Location Verification for Spoofing Detection in Non-Terrestrial Networks</title>
<link>https://hdl.handle.net/1721.1/162967</link>
<description>Location Verification for Spoofing Detection in Non-Terrestrial Networks
Schatz, Ensign Nathan Caleb
Reliable location awareness is essential for the development of new services and applications in non-terrestrial networks (NTN). The ability of malicious users to report false location information poses a significant threat to NTN performance. This threat introduces the need for a flexible and robust location verification system (LVS) that can reliably detect malicious users. This paper proposes a single-satellite LVS based on round-trip time and angle-of-arrival measurements. We characterize several sources of uncertainty unique to the NTN scenario and examine their combined effect on positioning error. To detect spoofing probabilistically, we approximate the likelihood function for the unknown user position using a Gaussian mixture model and employ a likelihood ratio decision rule for location verification. Receiver operating characteristic curves are presented to evaluate LVS performance under various satellite ephemeris error conditions, spoofing distances, numbers of measurements available to the system, and wireless channel properties. The proposed LVS is shown to reliably detect spoofing among malicious users.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162967</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Triangle Splatting</title>
<link>https://hdl.handle.net/1721.1/162966</link>
<description>Triangle Splatting
Xu, Daniel
We develop a differentiable rendering method for recovering 3D meshes of scenes from 2D images. Unlike existing approaches, our method does not rely on a differentiable renderer and is compatible with any standard mesh rasterizer. To our knowledge, it is the first mesh-based differentiable rendering method that entirely avoids the use of visibility masks. Beyond these conceptual advancements, we implemented a set of highly optimized kernels that enable efficient scene representation on a sparse voxel grid, effectively overcoming the cubic scaling bottleneck faced by similar methods. These innovations result in promising performance on unbounded real-world scenes with complex backgrounds.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162966</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Control Strategies for Mitigating Spaceflight Fluid Shifts Using Lower Body Negative Pressure and Non-Invasive Fluid Shift Sensing</title>
<link>https://hdl.handle.net/1721.1/162965</link>
<description>Adaptive Control Strategies for Mitigating Spaceflight Fluid Shifts Using Lower Body Negative Pressure and Non-Invasive Fluid Shift Sensing
Ortiz, Ciarra Celena
Entering a microgravity environment induces cephalad fluid shifts that can lead to cardiovascular and renal-hormonal adaptations that can affect astronaut health and performance in space. Current monitoring strategies for fluid shift lack the ability to track regional fluid shifts in real time, which limits countermeasure efficacy. This thesis investigates and validates the use of prototype non-invasive radiofrequency (RF) sensors for regional fluid shift detection. Additionally, the integration of the feedback from these sensors into Lower Body Negative Pressure (LBNP) chambers could allow for the development of an adaptive Lower Body Negative Pressure regulation framework. Coaxial RF sensors were designed and characterized using tissue phantoms, and tested in a human subject study involving controlled LBNP exposure. Reflection coefficients (S₁₁ and S₂₂) were analyzed to detect regional fluid changes in arm and leg tissue. The preliminary results indicated a statistically significant decrease in the arm reflection coefficients (S₁₁) during active LBNP, which is consistent with fluid being pulled towards the lower body. The leg reflection coefficients (S₂₂) were more variable and did not exhibit statistically significant results, suggesting a need for further investigation of sensor placement and sensitivity. This work demonstrates the potential of using wearable RF sensors for non-invasive fluid shift monitoring and lays the foundation for integrating fluid sensor feedback into adaptive LBNP control protocols to improve astronaut health monitoring and countermeasure personalization.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162965</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Inference via Optimal Transport Ambiguity Sets</title>
<link>https://hdl.handle.net/1721.1/162964</link>
<description>Robust Inference via Optimal Transport Ambiguity Sets
Wang, Zheyu
Uncertainty quantification is pivotal for ensuring the safety and reliability of predictive algorithms in high-stakes applications—ranging from cancer diagnosis to autonomous driving. This challenge is exacerbated by distribution shift, in which the true data-generating distribution diverges from the nominal distribution on which our statistical methods were trained. In this thesis, we formalize distribution shifts via ambiguity sets—metric neighborhoods in the space of probability measures defined by distances such as the Wasserstein metric—and demonstrate that leveraging these ambiguity sets endows two widely used statistical algorithms with distributional robustness. The Kalman filter enables accurate, real-time tracking of latent states by assimilating noisy, indirect measurements over time. Its performance relies on precise state-space models for both the evolution dynamics and the observation process. In practice, uncertainties in these models introduce errors that can significantly degrade filter accuracy. Here, we review two robust Kalman-filter variants that explicitly account for such errors via Wasserstein ambiguity sets. Split conformal prediction, hereafter referred to as conformal prediction, offers a powerful framework for quantifying predictive uncertainty by constructing prediction intervals with finite-sample, distribution-free guarantees. Despite its widespread success, ensuring its validity under train-test distribution shifts remains a significant challenge. We model distribution shifts using ambiguity sets defined by two optimal transport-based metrics and propose two robust conformal prediction algorithms that preserve validity under these shifts. First, we consider ambiguity sets defined by a pseudo-divergence derived from the Lévy-Prokhorov (LP) metric, which captures both local and global data perturbations. 
We provide a self-contained overview of LP ambiguity sets and their connections to widely used metrics such as the Wasserstein and Total Variation distances. We then establish a natural link between conformal prediction and LP ambiguity sets: by propagating the LP ambiguity set through the scoring function, we reduce complex high-dimensional distribution shifts to manageable one-dimensional shifts, enabling exact computation of the worst-case quantile and coverage. Building on this foundation, we develop valid robust conformal prediction intervals under distribution shifts, explicitly relating LP parameters to interval width and confidence levels. Experimental results on real-world datasets demonstrate the effectiveness of the proposed approach. Next, we extend our analysis to robust conformal prediction over Wasserstein-2 ambiguity sets, deriving a theoretical characterization of the worst-case quantile. However, we identify intractability due to the dependence on the shape of the original score CDF and conclude with potential future directions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162964</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical Limits of Quantum Ranging</title>
<link>https://hdl.handle.net/1721.1/162963</link>
<description>Theoretical Limits of Quantum Ranging
Kartal, Bünyamin
The ability to determine distances from dedicated measurements, namely active ranging, is crucial in a variety of systems including localization, radar, and lidar. This thesis establishes the quantum limits and determines the quantum advantage provided by single-beam displaced squeezed states in active ranging. Analytical expressions of the quantum Fisher information (QFI) are provided for monochromatic and continuous-mode waves passing through a thermal loss channel with arbitrary loss and noise conditions. The optimal allocation of system resources for performing displacement and squeezing operations is determined. The optimal allocation consists of apportioning all resources to perform either the displacement operation, providing no quantum advantage, or the squeezing operation. Analytical results are examined in optical and microwave regimes. The optimal gain, i.e., the ratio between the QFI obtained by optimal resource allocation and the QFI obtained by performing only the displacement operation, is derived for the optical and microwave regimes. Quantum advantage afforded by the prototypical heterodyne receiver is also investigated. The results of this thesis pave the way for establishing a foundation of active ranging and provide insights for system design employing currently available quantum technologies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162963</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generalized Policy Learning with Planning</title>
<link>https://hdl.handle.net/1721.1/162962</link>
<description>Generalized Policy Learning with Planning
Yang, Ryan P.
Generalized policy learning seeks to find policies that solve multiple tasks within a planning domain. We introduce methods to search for policies in a single domain, starting from empty initialized policies. As an extension, we also propose a problem setting for learning satisficing policies across domains. Within a single domain, we propose a score function to guide the policy search. Our approach, Policy-Guided Planning for Generalized Policy Generation (PG3), evaluates policies based on how well they can be used to plan. Empirically, we show that PG3 allows generalized policy learning to occur more efficiently than other baselines on PDDL-based problems with policies represented as lifted decision lists. Finally, our experiments show that independently learned policies are qualitatively similar, prompting investigation into possibilities for further accelerating the policy search process.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162962</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Meta-Learning Exploration Strategies with Decision Transformers</title>
<link>https://hdl.handle.net/1721.1/162961</link>
<description>Meta-Learning Exploration Strategies with Decision Transformers
Welch, Ryan
The problem of pure exploration in sequential decision-making is to identify strategies for efficiently gathering information to uncover hidden properties of an environment. This challenge arises in many practical domains, including clinical diagnostics, recommender systems, and educational testing, where data collection is costly and the effectiveness of exploration is critical. Efficient exploration in these contexts strongly depends on exploiting underlying structural relationships within the environment. For instance, recognizing that multiple medical tests may provide overlapping information can reduce the number of tests required to make a diagnosis. Existing exploration approaches drawn from reinforcement learning and active hypothesis testing typically rely on heuristic strategies that require explicit prior assumptions about such structural information. However, when this information is unknown, heuristic methods often lead to redundant exploration, significantly limiting their practical utility in high-stakes domains. Furthermore, these existing approaches do not leverage past experience to improve their exploration efficiency over time. To overcome these limitations, we introduce In-Context Pure Exploration (ICPE), a novel meta-learning framework capable of autonomously discovering and exploiting latent environmental structures across related tasks to guide efficient exploration. ICPE leverages the in-context learning and sequence-modeling capabilities of transformers, combined with supervised learning and deep reinforcement learning techniques to learn exploration strategies directly from experience. Through extensive experiments on synthetic and semi-synthetic exploration tasks, we demonstrate that ICPE is able to efficiently explore in deterministic, stochastic and highly structured environments without relying on any explicit inductive biases. 
Our results highlight the potential of ICPE to enable more practical exploration strategies suitable for real-world decision-making contexts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162961</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>WhatWhen2Ask: Cost-Aware LLM Querying for Autonomous Robots in Uncertain Environments</title>
<link>https://hdl.handle.net/1721.1/162960</link>
<description>WhatWhen2Ask: Cost-Aware LLM Querying for Autonomous Robots in Uncertain Environments
Thirumalai, Vittal
Autonomous agents operating in real-world environments must make decisions under uncertainty, facing challenges such as partial observability, sparse rewards, and long-horizon planning. While reinforcement learning (RL) enables agents to learn from experience, standard policies often struggle to generalize in the presence of ambiguous tasks or incomplete information. Large language models (LLMs) can provide valuable semantic guidance, but their high computational cost and latency make constant querying impractical. This thesis introduces WhatWhen2Ask, a framework for cost-aware, confidence-driven querying of external multimodal large language models (MLLMs). The agent employs a Deep Q-Network (DQN) as its internal action planner, selectively querying open- and closed-source models (BLIP-2 and GPT-4o) in a hierarchical manner when its confidence is low and external guidance is likely to improve performance. Accepted hints are embedded and fused with structured state representations, supported by tailored reward shaping for improved learning in sparse environments. Evaluated in the HomeGrid environment, WhatWhen2Ask improves the success rate from 38% (DQN-only) to 54%, while querying in fewer than 6% of steps. Ablation studies show that semantic hints, confidence-based querying, selective hint filtering, and hierarchical fallback each contribute meaningfully to performance. These results suggest that principled, confidence-aware LLM querying can enhance decision-making in uncertain environments, offering a step toward more efficient and cost-aware language-augmented agents.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162960</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Detecting Errors in Financial Data: A Multi-Agent LLM and Synthetic Data Approach</title>
<link>https://hdl.handle.net/1721.1/162958</link>
<description>Detecting Errors in Financial Data: A Multi-Agent LLM and Synthetic Data Approach
Liu, Katherine
With the high volume of activity flowing through financial institutions, detecting potential errors remains a critical challenge. This paper addresses two key areas where errors may occur: business name registrations and transactions within valid accounts. Traditional string-matching methods struggle to accurately identify incorrectly written business names that closely resemble existing ones, while existing error detection models for transaction data often suffer from class imbalance, leading to reduced performance on minority incorrect transaction cases. To address these issues, this paper proposes two novel approaches. First, a hybrid method integrating multi-agent Large Language Models (LLMs) with existing string-matching techniques enhances the detection of incorrect business names by capturing subtle variations beyond conventional edit-distance metrics, improving the recall from 0.815 for the baseline model to 0.987 using the proposed method. Second, an improved tabular data generation method for credit card transactions is introduced, leveraging LLMs and class balancing to generate high-quality synthetic data. Using this data to train error detection systems results in a decrease of the false negative rate from 23.47% to 12.84%. Together, these methods enhance the performance of error detection systems, enabling financial institutions to enhance the experiences of their clients.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162958</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Switching State Space Modeling via Constrained Inference for Clinical Outcome Prediction</title>
<link>https://hdl.handle.net/1721.1/162957</link>
<description>Switching State Space Modeling via Constrained Inference for Clinical Outcome Prediction
Su, Arnold C.
In clinical settings, timely and accurate prediction of adverse patient outcomes can help guide treatment decisions. While deep learning models such as LSTMs have demonstrated strong predictive performance on multivariate clinical time series, they often lack interpretability. To address this gap, this thesis proposes a framework that combines the predictive strength of neural networks with the interpretability of latent variable models. Specifically, we develop a constrained inference approach to train a switching state space model—an autoregressive hidden Markov model (AR-HMM)—for outcome prediction. Our method leverages knowledge distillation: a high-capacity LSTM "teacher" model is first trained to predict a target clinical outcome of interest, and its predictive behavior is then transferred to an interpretable AR-HMM "student" model through a similarity constraint during inference. We implement a constrained variational inference approach to estimate the parameters of the student model while aligning its latent representations with those of the teacher model. We evaluated our approach using two real-world clinical datasets. Our approach demonstrates predictive performance comparable to state-of-the-art deep learning models, while producing interpretable latent trajectories that reflect clinically meaningful patient states.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162957</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Duality, Weight Decay, and Metrized Deep Learning</title>
<link>https://hdl.handle.net/1721.1/162956</link>
<description>Duality, Weight Decay, and Metrized Deep Learning
Newhouse, Laker
The Muon optimizer has shown convincing evidence that it is faster and more scalable than AdamW for deep learning training, setting speed records for training NanoGPT and scaling up to models with 16B parameters. The theory that led to Muon is called metrized deep learning, a method that suggests assigning norms to each part of a neural network. Chapter 1 begins with an accessible explanation of metrized deep learning, including one of its recurring tools: odd polynomial iterations that act directly on singular values. Chapter 2 reviews duality, a way to modify the gradient that seeks to decrease the loss the most while disturbing the model the least. Pedagogically, duality links four popular optimizers—SGD, Adam, Shampoo, and Muon—under a common framework, steepest descent under a norm. Practically, experiments suggest that duality-based optimizers train faster than AdamW and transfer learning rate across width. Chapter 3 develops tools to enforce weight norm constraints during training, conferring provable and upfront Lipschitz guarantees for transformers. We find that optimizer dynamics matter: switching from AdamW to Muon improves standard weight regularization methods—weight decay and spectral normalization—allowing models to reach equal performance with a lower Lipschitz bound. Leveraging the fact that Muon’s update has a fixed spectral norm, we co-design a weight constraint method called spectral cap that improves the Lipschitz vs. performance tradeoff for MLPs and 2M parameter transformers. Our 4-Lipschitz transformer on Shakespeare text reaches 60% validation accuracy. Scaling to 145M parameters, our 600-Lipschitz transformer reaches 21% accuracy on internet text. However, to match the NanoGPT baseline validation accuracy of 39.4%, our Lipschitz upper bound increases to 10^274. Nonetheless, our Lipschitz transformers train without stability measures such as layer norm, QK norm, and tanh logit softcapping.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162956</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Sequence Uncertainty in Comparative Genomics with a Probabilistic DNA Representation</title>
<link>https://hdl.handle.net/1721.1/162955</link>
<description>Modeling Sequence Uncertainty in Comparative Genomics with a Probabilistic DNA Representation
Zhao, Sarah Ann
Uncertainty in nucleotide sequences is widespread in bioinformatics, arising from somatic mutations, population-level variation, sequencing errors, and ancestral state inference. Yet, standard formats like FASTA encode DNA deterministically using ASCII string characters, omitting this uncertainty and contributing to pervasive reference biases in genomics. Graph pangenomes have recently emerged to address these limitations by representing genetic variation across populations as bidirected graphs. While promising, these approaches are still developing and are not yet fully integrated with widely used linearly-referenced genomic tools and databases. To bridge this gap, I introduce pDNA (probabilistic DNA), a linearly-referenced data structure that encodes nucleotide-level uncertainty in a vector format compatible with traditional genomics workflows. Each position in a pDNA sequence is represented as a 4-dimensional probability vector over the four possible DNA nucleotides, inspired by position weight matrices and one-hot encodings. I also introduce pFASTA, a binary file format for efficient storage of pDNA sequences, along with an open-source software package for generating, manipulating, and analyzing these data. This framework enables uncertainty-aware sequence analysis while maintaining compatibility with existing genomics infrastructure. I apply this framework to ancestral sequence reconstruction.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162955</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Online Acquisition of Simulatable Rigid Object Models</title>
<link>https://hdl.handle.net/1721.1/162954</link>
<description>Online Acquisition of Simulatable Rigid Object Models
Yang, Ethan
How can we build a robot that operates autonomously in a home environment over long periods of time? A key requirement is the ability to perceive and understand its surroundings, including the objects it will interact with. This thesis investigates how a robot can reconstruct previously unknown objects and integrate them into a physics simulation for planning. We explore two methods for reconstructing the 3D geometry of objects and test their performance in simulation and in real-world experiments. Our results demonstrate that a learned depth model enables 3D reconstruction of unknown objects and their successful integration into simulation environments. Additionally, we investigate methods for estimating an object’s inertial parameters, using its reconstructed mesh and through manipulation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162954</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling contrastive learning batch size by two orders of magnitude</title>
<link>https://hdl.handle.net/1721.1/162953</link>
<description>Scaling contrastive learning batch size by two orders of magnitude
Tian, Betsy
Contrastive learning has emerged as a powerful framework for unsupervised representation learning, allowing models to learn by maximizing agreement between related samples and distinguishing dissimilar ones. However, contrastive learning frameworks are fundamentally limited by the number of negative pairs a model can observe, and memory-intensive backbones constrain practical batch sizes. We introduce a three-phase, adapter-augmented training framework that scales contrastive batch sizes by two orders of magnitude – surpassing previous state-of-the-art learners in both accuracy and speed. First, we co-train the backbone and adapter on small batches to establish a strong initialization. Next, we freeze the backbone and train the adapter alone with very large batches, exposing it to an enlarged negative pool. Finally, we transfer large-batch adapter gradients back into the backbone via segmented backpropagation. We evaluate our method on the PlacesAudio dataset and show promising results for boosting retrieval performance at each phase. By exposing the model to substantially more negatives per effective batch, we achieve higher accuracy at a faster speed than optimizer-stepping baselines. Ultimately, this approach that scales batch size by hundreds of times can be integrated into any contrastive learning framework for more robust representation learning and abundant negative sampling.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162953</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Fully Automated Volumetric Analysis of Lung Nodules in Computed Tomography</title>
<link>https://hdl.handle.net/1721.1/162952</link>
<description>Towards Fully Automated Volumetric Analysis of Lung Nodules in Computed Tomography
Rubel, Evan
Early detection of lung cancer significantly improves patient outcomes, and tracking the growth of lung nodules over time is key to understanding their progression and informing future treatment decisions. However, calculating nodule growth in computed tomography (CT) scans remains a highly manual and time-consuming task. In this work, we develop an automated end-to-end pipeline to compute lung nodule growth using state-of-the-art computer vision techniques. While modern advances in deep learning have all but solved many learning tasks in the domain of natural images, biomedical imaging presents unique challenges due to limited data availability, inconsistent annotations, and deployment constraints. We address these challenges by training robust detection and segmentation models using the LUNA16 and LNDb datasets. On the held-out UniToChest dataset, our methods generalize well, attaining a nodule recall of 77.49%, reducing false positives per scan by a factor of 11.3 compared to existing techniques, and achieving a mean nodule-wise Dice score of 0.6453. We then apply our methods to analyze nodule growth in 1,378 patients from the National Lung Screening Trial; we estimate a median nodule volume-doubling time of 791.23 days across all nodules from the patients that do not receive a cancer diagnosis and a median nodule volume-doubling time of 637.38 days across all nodules from the patients that do receive a cancer diagnosis. We also recall 82.20% of radiologist-annotated nodules that are directly associated with a cancer diagnosis and estimate a shorter median nodule volume-doubling time of 370.11 days for these nodules. By automating lung nodule growth quantification, this work lays the foundation for improved screening protocols, personalized treatment planning, and the development of novel imaging biomarkers. To encourage further work in this area, we release our full software pipeline at https://github.com/evanrubel/nodule_volumes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162952</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Analysis of an 80 GHz Hybrid CMOS Dielectric Resonator Oscillator</title>
<link>https://hdl.handle.net/1721.1/162951</link>
<description>Design and Analysis of an 80 GHz Hybrid CMOS Dielectric Resonator Oscillator
Louie, Tiffany
This work studies a high-frequency, low-phase-noise, hybrid CMOS oscillator based on a cylindrical dielectric resonator coupled directly to an on-chip structure. Dielectric resonators (DRs) are known for their high quality factor, low cost, and high temperature stability, which make them a desirable frequency-selecting element for millimeter-wave (mmWave) designs. Current dielectric resonator oscillators (DROs) have proven to be phase stable, but are limited in frequency (&lt; 40 GHz) due to their implementation with discrete components. However, by increasing the operational frequency into the mmWave range, it is possible to reduce the size of the DR and place it directly on top of a CMOS chip. We demonstrate, using a 22 nm FD-SOI process, the design of an 80 GHz DRO with an area of 4 mm² and an oscillator power consumption of 1.95 mW. The DRO achieves a simulated phase noise of -128 dBc/Hz at a 1 MHz offset and -148 dBc/Hz at a 10 MHz offset.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162951</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>LEO: an LLM-Powered EDA Overview</title>
<link>https://hdl.handle.net/1721.1/162950</link>
<description>LEO: an LLM-Powered EDA Overview
Zheng, Sophia
Computational notebooks impose a linear structure that impedes data analysts’ sensemaking process with overwritten cells, dead-end code, and fragmented logic. This challenge is especially pronounced when analysts either encounter a notebook authored by someone else or revisit a self-authored notebook after significant time has passed. In both cases, understanding the analysis code becomes convoluted and laborious. To address these barriers, we introduce LEO, a computational notebook tool that operationalizes notebook summarization by leveraging large language models to (1) cluster analysis patterns and (2) trace variable use. LEO organizes code into a two-level hierarchy—General Level Sections and Code Level Actions—integrated with in-line textual summaries filtered at the variable level, further supporting task-driven exploration. We evaluate the system’s effectiveness in a user study with five computational notebook users across two realistic use cases. Participants reported that LEO streamlined code comprehension and navigation of undocumented notebooks by allowing them to query variables and traverse code cells with greater ease.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162950</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Articulated 3D Scene Graphs from Egocentric Vision</title>
<link>https://hdl.handle.net/1721.1/162949</link>
<description>Articulated 3D Scene Graphs from Egocentric Vision
Yu, Alan
Robotic mapping systems typically approach building metric-semantic scene representations from the robot’s own sensors and cameras. However, these “first person” maps inherit the robot’s own limitations due to its embodiment or skillset, which may leave many aspects of the environment unexplored. For example, the robot might not be able to open drawers or access wall cabinets. In this sense, the scene graph is not as complete, and requires a more capable robot to fill in the gaps by remapping. We narrow these blind spots in current methods by leveraging egocentric data captured as a human naturally explores a scene wearing Project Aria glasses, giving a way to directly transfer knowledge about articulation from the human to any deployable robot. We demonstrate that, by using simple heuristics, we can leverage egocentric data to recover models of articulate object parts, with quality comparable to those of state-of-the-art methods based on other input modalities. We also show how to integrate these models into 3D scene graph representations, leading to a better understanding of object dynamics and object-container relationships. We finally demonstrate that these articulated 3D scene graphs enhance a robot’s ability to perform mobile manipulation tasks, showcasing an application where a Boston Dynamics Spot is tasked with retrieving concealed target items, given only the 3D scene graph as input.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162949</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decentralized Declustering of Multiple Underactuated Autonomous Surface Vehicles</title>
<link>https://hdl.handle.net/1721.1/162948</link>
<description>Decentralized Declustering of Multiple Underactuated Autonomous Surface Vehicles
Strømstad, Filip Traasdahl
Multi-agent systems have seen a significant rise in research interest, enabled by the increasing availability of low-cost autonomous platforms and motivated by a wide range of emerging applications. However, the coordinated deployment of large numbers of autonomous vehicles in marine environments remains a nontrivial and high-risk problem, yet it is often overlooked in the literature. These vehicles are typically deployed from a single location, and their underactuated nature, close proximity, and susceptibility to external disturbances make it difficult to achieve a mission-ready configuration without collisions. In this thesis, we address the problem of transitioning a set of underactuated Autonomous Surface Vehicles (ASVs) from arbitrary and inconvenient initial conditions to a deconflicted set of deployed vehicles. We propose a decentralized and scalable method that calculates and assigns target positions to the vehicles, generates optimal paths that comply with minimum turning radius constraints, and ensures collision avoidance between the vehicles through a shared speed policy. Contributions also include a formal definition and quantification of clustering and declustering in multi-agent systems. The approach is implemented using the MOOS-IvP autonomy framework, and performance is evaluated through simulation with up to 64 vehicles and extensive field trials with eight vehicles. Results demonstrate that our approach reduces the time to decluster for the most challenging initial conditions by 50% compared to the current manual method. By improving efficiency and robustness while eliminating human involvement, this work streamlines ASV fleet deployments, enabling more scalable multi-agent field operations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162948</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>DBOS Advanced Network Analysis Capability for Collaborative Awareness</title>
<link>https://hdl.handle.net/1721.1/162947</link>
<description>DBOS Advanced Network Analysis Capability for Collaborative Awareness
Lockton, Sophia E.
Collaborative cyber defense is an essential strategy for detecting and mitigating cyber threats [1]. As traditional intrusion detection systems struggle against increasingly sophisticated attacks, we propose embedding collaborative cyber defense directly into system infrastructure. This work presents a novel implementation of collaborative awareness within DBOS (a Database-Oriented Operating System), resulting in a platform that significantly accelerates application development while providing built-in security for transactional web services. By treating security as a first-class operating system service, our approach facilitates real-time comprehensive network observation and analysis without the need for external tools. The implementation supports the construction, aggregation, and analysis of traffic matrices using both Python and PostgreSQL-based workflows. These workflows extract and process IP-level metadata from DBOS applications, enabling multi-instance aggregation and analysis of network data. This integration represents the first instance of collaborative network analysis within an operating system runtime, demonstrating that secure-by-default infrastructure is both feasible and performant.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162947</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minding the Politeness Gap in Cross-cultural Communication</title>
<link>https://hdl.handle.net/1721.1/162946</link>
<description>Minding the Politeness Gap in Cross-cultural Communication
Machino, Yuka
Misunderstandings in cross-cultural communication often arise from subtle differences in interpretation, but it is unclear whether these differences arise from the literal meanings assigned to words or from more general pragmatic factors such as norms around politeness and brevity. In this paper, we report three experiments examining how speakers of British and American English interpret intensifiers like “quite” and “very,” finding support for a combination of semantic and pragmatic factors. To better understand these differences, we developed a computational cognitive model where listeners recursively reason about speakers who balance informativity, politeness, and utterance cost. A series of model comparisons suggest that cross-cultural differences in intensifier interpretation stem from (1) different literal meanings and (2) different weights on utterance cost. These findings challenge accounts based purely on semantic variation or politeness norms, demonstrating that cross-cultural differences in interpretation emerge from an intricate interplay between the two.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162946</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empirical Analysis of Neural Architectures and Side Information in Financial Time Series Forecasting</title>
<link>https://hdl.handle.net/1721.1/162945</link>
<description>Empirical Analysis of Neural Architectures and Side Information in Financial Time Series Forecasting
Senthil, Swathi
This thesis investigates the predictive capabilities of neural networks in financial time series forecasting, focusing on predicting the weekly close price of the SPY index. We explore the integration of options-derived features alongside traditional price data, compare recurrent architectures and transformer-based models, and evaluate multiple training strategies. Our key contributions include: (1) evidence that options-derived input features improve both error metrics and directional accuracy; (2) a comparison study of four training methods (one-step-ahead, direct multi-step, simulation error, and teacher-forcing); (3) the development of a bidirectional GRU-LSTM hybrid model that outperforms standard recurrent networks in multi-step forecasting; and (4) a novel coarse tokenization approach for discretizing continuous financial data, which improves first-week prediction performance when used in transformer models that use an asymmetric attention mechanism. Overall, this thesis illustrates the importance of input design, model architecture, and training methodology in neural financial forecasting. We conclude by outlining directions for future work, including cross-asset generalization and further exploration of tokenization schemes for transformer-based models.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162945</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigating LLM Hallucination in the Banking Domain</title>
<link>https://hdl.handle.net/1721.1/162944</link>
<description>Mitigating LLM Hallucination in the Banking Domain
Sert, Deniz Bilge
Large Language Models (LLMs) offer significant potential in the banking sector, particularly for applications such as fraud detection, credit approval, and enhancing customer experience. However, their tendency to "hallucinate"—generating plausible but inaccurate information—poses a critical challenge. This thesis examines existing strategies for mitigating LLM hallucinations and proposes a novel approach to reduce hallucinations in the context of predicting customer churn using LLMs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162944</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layered Unlearning for Adversarial Relearning</title>
<link>https://hdl.handle.net/1721.1/162943</link>
<description>Layered Unlearning for Adversarial Relearning
Qian, Timothy
Our goal is to understand how post-training methods, such as fine-tuning, alignment, and unlearning, modify language model behavior and representations. We are particularly interested in the brittle nature of these modifications that makes them easy to bypass through prompt engineering or relearning. Recent results suggest that post-training induces shallow, context-dependent “circuits” that suppress specific response patterns. This could be one explanation for the brittleness of post-training. To test this hypothesis, we design an unlearning algorithm, Layered Unlearning (LU), that creates distinct inhibitory mechanisms for a growing subset of the data. By unlearning the first i folds while retaining the remaining k − i at the i-th of k stages, LU limits the ability of relearning on a subset of data to recover the full dataset. We evaluate LU through a combination of synthetic and large language model (LLM) experiments. We find that LU improves robustness to adversarial relearning for several different unlearning methods. Our results contribute to the state of the art in machine unlearning and provide insight into the effect of post-training updates.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162943</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mixed-Variable Bayesian Optimization using Prior-Data Fitted Networks</title>
<link>https://hdl.handle.net/1721.1/162942</link>
<description>Mixed-Variable Bayesian Optimization using Prior-Data Fitted Networks
Qian, Janet
Bayesian optimization (BO) is a powerful framework for optimizing expensive blackbox functions, widely used in domains such as materials science, engineering design, and hyperparameter tuning. Traditional BO relies on Gaussian processes (GPs) as surrogate models, but GPs face limitations in flexibility and scalability. Prior-Data Fitted Networks (PFNs) have recently emerged as a promising alternative, leveraging transformer architectures and in-context learning to approximate posterior predictive distributions (PPDs) in a single forward pass. By training on large amounts of synthetically generated data from sample-able function priors, PFNs can learn to rapidly predict PPDs across a wide range of function classes. In this thesis, we investigate the application of PFNs to mixed-variable BO, a particularly challenging setting due to the interplay between continuous and discrete inputs and the combinatorial complexity of the search space. We evaluate how PFNs perform when integrated with a range of mixed-variable BO strategies, including various encoding schemes and discrete-aware acquisition optimization. Additionally, we explore how fine-tuning PFNs on targeted function priors can enhance performance when prior knowledge about the objective is available. Our contributions include empirical evaluations of mixed-variable BO techniques, insights into PFN training, and a suite of mixed-variable benchmark problems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162942</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Eliminating Hallucination-Induced Errors in Code Generation with Functional Clustering</title>
<link>https://hdl.handle.net/1721.1/162941</link>
<description>Eliminating Hallucination-Induced Errors in Code Generation with Functional Clustering
Ravuri, Chaitanya
Modern code-generation LLMs can already solve a large fraction of programming problems, yet they still hallucinate subtle bugs that make their outputs unsafe for autonomous deployment. We present functional clustering, a black-box wrapper that eliminates nearly all hallucination-induced errors while providing a tunable confidence score. The wrapper samples many candidate programs, executes each on a self-generated test suite, and clusters candidates whose I/O behavior is identical; the empirical mass of the largest cluster serves as an exact confidence estimate. A single scalar threshold on this estimate lets users trade coverage for reliability with exponential guarantees. On LiveCodeBench our verifier preserves baseline pass@1 on solvable tasks yet slashes the error rate of returned answers from ∼65% to 2%, and drives it to 0% at a conservative threshold while still answering 15.6% of prompts. Manual audits show that the few residual mistakes stem from prompt misinterpretation, not random generation noise, narrowing future work to specification clarity. Because the method requires only sampling and sandbox execution, it applies unchanged to closed-source APIs and future models, offering a practical path toward dependable, autonomous code generation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162941</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Choosing Networks for Ride-Hailing Platforms</title>
<link>https://hdl.handle.net/1721.1/162940</link>
<description>Choosing Networks for Ride-Hailing Platforms
Somsirivattana, Thana
The development of autonomous vehicles is poised to reshape the landscape of transportation. As companies prepare to deploy these vehicles on ride-hailing platforms, a key operational challenge is determining the networks on which to train the vehicles. Our work contributes toward addressing this challenge on three fronts. First, we develop a theoretical model of the network selection problem and prove theoretical results that show the importance of two parameters: the detour factor and the fleet size. Second, we develop several approaches for selecting the networks. Third, we evaluate these approaches on empirical data. We find empirical support for the importance of the detour factor and the fleet size.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162940</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>PyGridSim: A Functional Interface for Distributed System Simulation</title>
<link>https://hdl.handle.net/1721.1/162939</link>
<description>PyGridSim: A Functional Interface for Distributed System Simulation
Zhao, Angela M.
This thesis details the development of PyGridSim, an open-source Python module that leverages OpenDSS capabilities to provide an efficient and scalable functional interface for building distributed system simulations. Distributed power systems encompass all components that power an electrical system—from larger power plants to microgrids—and represent the network of electric consumption and production in a system. Simulations of such power systems allow experts to analyze potential faults and risks in a fast, reproducible, and cost-efficient way. Thus, the accessibility of such simulations is critical to supporting the safety and reliability of power systems. While existing packages built for distributed system simulation provide the necessary computing power and customizability of a distributed system simulator, their interfaces are hard to scale over many nodes and often have difficult-to-learn syntax. PyGridSim aims to build on these existing modules—maintaining customizability while providing a flexible, intuitive, and scalable syntax structure.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162939</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving the Programmability of A Distributed Hardware Accelerator</title>
<link>https://hdl.handle.net/1721.1/162938</link>
<description>Improving the Programmability of A Distributed Hardware Accelerator
Shwatal, Nathan A.
Sparse iterative matrix algorithms are critical to many scientific and engineering workloads, yet they perform poorly on conventional hardware. Ōmeteōtl, a new hardware accelerator with a distributed-memory and task-based execution model, aims to address these performance bottlenecks. However, programming for Ōmeteōtl is low-level, error-prone, and far removed from the simplicity of typical iterative formulations. This thesis presents Lapis, a domain-specific language and compiler that allows users to express sparse matrix algorithms in high-level Python code and automatically generates efficient C++ code for Ōmeteōtl. Lapis abstracts away data partitioning and task orchestration, reducing implementation complexity: for example, it lowers lines of code by 30× for conjugate gradients and 46× for power iteration. Despite this abstraction, generated code achieves 75.7% to 92.6% of the performance of manually written implementations across several benchmarks.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162938</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Study of Thermochemical Non-equilibrium and Sensor Cavity Geometry in Hypersonic Flow</title>
<link>https://hdl.handle.net/1721.1/162937</link>
<description>Study of Thermochemical Non-equilibrium and Sensor Cavity Geometry in Hypersonic Flow
Mao, Grace
This work presents a computational investigation of the influence of geometric configurations within a hypersonic flow field on optical distortion, with a particular focus on the effects of window deformation and the role of thermochemical modeling compared to perfect gas assumptions. Turbulent RANS and conjugate heat transfer were used to model three 3D geometries in US3D, an unstructured-grid finite volume computational fluid dynamics (CFD) solver. The three investigated geometries are a flat plate with a flush-mounted sensor, an open cavity with a length-to-depth ratio of 2, and a closed cavity with a length-to-depth ratio of 16. The data demonstrate that the flat plate configuration has the best optical performance and that the closed cavity has the worst. Additionally, the inclusion of thermochemistry in the flow simulation results in a more pessimistic outlook on image quality compared to the perfect gas model. The results document optical distortion for several different geometries with and without thermochemical modeling within hypersonic flow that can inform future design decisions and research.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162937</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>BlueVeri: Formal Security Verification for Bluespec Processor Designs</title>
<link>https://hdl.handle.net/1721.1/162936</link>
<description>BlueVeri: Formal Security Verification for Bluespec Processor Designs
Wang, Shih-Yu
There are numerous hardware security defense mechanisms designed to mitigate side-channel attacks. However, ensuring that a defense can comprehensively protect against an entire class of attacks, while avoiding the introduction of new vulnerabilities that could lead to additional attack surfaces, remains a significant challenge. Although researchers have attempted to apply formal verification techniques to hardware security, these efforts have been hindered by scalability issues. In this paper, we introduce BlueVeri, a systematic and automatable approach for formally verifying the security of a Bluespec processor against speculative execution attacks. BlueVeri leverages the high-level information provided by Bluespec’s guarded atomic actions, simplifying and accelerating the verification process. We evaluate BlueVeri on out-of-order processors implemented in Bluespec, demonstrating that our approach substantially enhances verification scalability and is capable of proving the security properties of a minimal out-of-order processor within one hour.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162936</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stairway to Autonomy: Hierarchical Decision-Making for LLM-Guided Planning, Bandit-Driven Exploration, and Multi-Agent Navigation</title>
<link>https://hdl.handle.net/1721.1/162935</link>
<description>Stairway to Autonomy: Hierarchical Decision-Making for LLM-Guided Planning, Bandit-Driven Exploration, and Multi-Agent Navigation
Nayak, Siddharth Nagar
Autonomous multi-agent systems must efficiently plan, explore, and navigate in dynamic and unknown environments, particularly for tasks like search &amp; rescue and environmental monitoring. These settings are often characterized by partial observability, limited communication, and dynamic objectives that require flexible coordination across agents. Designing autonomy that scales with team size and task complexity requires modular decision-making systems capable of high-level reasoning, information-driven exploration, and robust decentralized execution. This dissertation presents a hierarchical decision-making framework that addresses these challenges across three complementary levels of autonomy: high-level planning, adaptive exploration, and decentralized scalable navigation. At the highest level, LLaMAR (Language Model-based Long-Horizon Planner for Multi-Agent Robotics) leverages large language models (LLMs) to decompose long-horizon tasks into structured subtasks, enabling agents to adapt their strategies dynamically. However, the effective execution of these plans requires knowledge about the environment. Our mid-level exploration strategy, BaTMaN (Bandit-based Tracking and Monitoring and Navigation), systematically prioritizes waypoints that maximize information gain while balancing real-world constraints such as energy efficiency and sensor reliability. Finally, InforMARL provides scalable, decentralized navigation by leveraging graph-based local information aggregation, improving sample efficiency, and demonstrating transferability to unseen team sizes. This dissertation develops each of these modules to address a distinct level of the autonomy stack. LLaMAR functions as the high-level planner, translating natural language goals into structured sequences of subtasks and incorporating real-time corrections through a plan-act-correct-verify cycle.
BaTMaN serves as the mid-level exploration engine, guiding sensor-equipped agents to prioritize informative regions based on uncertainty. InforMARL operates at the execution level, enabling decentralized agents to navigate through dynamic environments using graph-based local information aggregation and reactive control policies. Each module is independently deployable and optimized for different challenges: strategic reasoning, data-efficient monitoring, and scalable navigation, respectively. When combined, the three modules form a coherent autonomy stack for multi-agent systems operating under uncertainty.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162935</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Methods for Churn Prediction and Infrastructure Resilience</title>
<link>https://hdl.handle.net/1721.1/162934</link>
<description>Machine Learning Methods for Churn Prediction and Infrastructure Resilience
Agrawal, Shreeansh
This thesis investigates how advanced machine learning methods can effectively address two critical business challenges facing the telecommunications industry: short-term customer churn prediction and long-term infrastructure resilience to climate-driven disruptions.&#13;
&#13;
In the first part of this work, I develop an upgrades-informed churn forecasting model tailored specifically for marketing operations. Recognizing limitations in the existing aggregate forecasting methodologies, I create a cohort-based cascade model that explicitly integrates customer upgrade behavior across various contract tenures. To address data sparsity and longitudinal gaps in newer contract types, I employ synthetic data generation and imputation techniques, such as regression-based methods and Multivariate Imputation by Chained Equations (MICE). For forecasting churn and upgrade rates, I prioritize interpretability by applying linear regression enhanced with time-series forecasting techniques and macroeconomic indicators, including the Consumer Price Index. This approach significantly improves forecasting accuracy, aligns internal stakeholder objectives, and supports strategic decision-making around customer retention and promotional offers.&#13;
&#13;
The second part focuses on building predictive models and strategic frameworks for long-term infrastructure resilience in the face of increasing climate risks. Leveraging spatial-temporal clustering methods (DBSCAN) and advanced neural network architectures, I develop a model to attribute historical outages to extreme weather events. Further, I integrate this model with future climate scenarios from CMIP5 projections using Monte Carlo simulations, providing actionable insights into future infrastructure vulnerabilities. Employing SHapley Additive exPlanations (SHAP), I interpret model predictions, highlighting critical factors such as precipitation, windspeed, and atmospheric pressure. Additionally, I propose frameworks for quantifying financial impacts of future outages and recommend optimization strategies for proactive infrastructure hardening and emergency response.&#13;
&#13;
Collectively, these applications demonstrate the value of strategically employing interpretable and robust machine learning methodologies to enhance short-term operational decisions and long-term strategic planning within telecom organizations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162934</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Segmentation Based Tracking for Aerial Robot Global&#13;
Localization in Unstructured Environments with Oblique&#13;
Monocular Camera Orientation</title>
<link>https://hdl.handle.net/1721.1/162933</link>
<description>Segmentation Based Tracking for Aerial Robot Global&#13;
Localization in Unstructured Environments with Oblique&#13;
Monocular Camera Orientation
Shafferman, Hannah R.
In the field of robotics, there has been a growing interest in multi-robot systems and their potential to improve the efficiency, scale, and reliability of tasks beyond what an individual robot can achieve. Global localization is a crucial task for autonomous robot navigation, specifically in the multi-agent scenario where robots need to localize within maps communicated by other agents. The scenario where vehicles are viewing their environments from the same perspective, or camera viewpoint, is well studied. However, when environments are mapped from different camera viewing angles, traditional methods fail to match visual features and thus fail to localize. The technical gap that this thesis addresses arises when autonomous vehicles within a team are mapping the same environment from different viewpoints, specifically nadir and oblique camera orientations in an unstructured environment. Many existing visual place recognition (VPR) methods fail to match visual features that look visually different due to appearance, illumination, or viewpoint changes and thus fail to localize. In this thesis, we demonstrate the shortcomings of previous work in generalizing to an off-nadir camera angle and explore the benefits and challenges that arise with utilizing oblique imagery for visual feature detection and tracking. We propose a segmentation-based object tracking pipeline to improve tracking and environment mapping performance in this traditionally challenging scenario. Our approach consists of 1) a front-end auto-segmentation tracking pipeline followed by 2) a submap correspondence search, which exploits geometric consistencies between environment maps to align vehicle reference frames. We evaluate our approach on a challenging indoor, cluttered dataset and demonstrate a maximum precision 74% higher than traditional and learning-based baseline methods, with a map size 0.5% the size of the most memory-conservative traditional baseline method.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162933</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Aerocapture Guidance and Estimation Framework for Improved Robustness to Uncertainty</title>
<link>https://hdl.handle.net/1721.1/162932</link>
<description>An Aerocapture Guidance and Estimation Framework for Improved Robustness to Uncertainty
Sonandres, Kyle A.
Aerocapture is an orbital insertion maneuver that converts a hyperbolic approach trajectory into a desired captured orbit using the aerodynamic forces generated during a single atmospheric pass. While it offers major benefits, such as reduced interplanetary cruise time and lower propellant mass reserves, it also introduces significant risk due to extreme sensitivity to atmospheric and delivery state uncertainties. This drives the need for robust guidance algorithms and accurate environmental estimation techniques. This thesis presents approaches to address both of these needs, developing solutions to improve aerocapture performance and robustness to uncertainty. The first contribution is the development of ABAMGuid+, a novel aerocapture guidance algorithm that leverages simultaneous control over bank angle and angle of attack. Inspired by optimal control theory, the algorithm uses a four-phase structure to mimic the optimal control laws while maintaining tractability for online use. Optimal control theory is utilized to identify the optimal control solutions, and numerical optimization is used to validate the analytic solutions prior to integration into a guidance algorithm. Extensive simulation results of a Uranus aerocapture scenario, including over 140,000 Monte Carlo trajectories, demonstrate significant improvements in capture success rates and propellant efficiency compared to existing methods. The second contribution addresses environmental uncertainty directly by developing a deep learning-based approach to estimate the atmospheric density profile during flight. A long short-term memory (LSTM) neural network-based architecture is trained to predict atmospheric density given sequences of flight data. The trained model is integrated into the guidance loop and a curriculum learning process is used to refine in-flight performance. 
Monte Carlo results show that the LSTM-augmented guidance system reduces propellant usage compared to traditional estimation methods. In summary, this thesis presents two approaches that improve aerocapture performance and robustness to uncertainty. We show that this added robustness can be achieved both by expanding algorithmic ability and by improving environmental estimation approaches.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162932</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mass and Distance Estimation Simulations for the Nancy Grace Roman Space Telescope Using PyLIMASS and a Case Study on Intellectual Property Frameworks in Space Collaborations</title>
<link>https://hdl.handle.net/1721.1/162931</link>
<description>Mass and Distance Estimation Simulations for the Nancy Grace Roman Space Telescope Using PyLIMASS and a Case Study on Intellectual Property Frameworks in Space Collaborations
McGee, Carissma
Gravitational microlensing is a phenomenon in which a foreground star or planet briefly magnifies light from a more distant background star. This effect enables the discovery of exoplanets that are otherwise undetectable, including those orbiting faint hosts and at large separations. Microlensing is well suited to characterizing exoplanets beyond the snow line, revealing mass ratios and orbital geometries inaccessible to transit or radial velocity methods. The Nancy Grace Roman Space Telescope will carry out the Galactic Exoplanet Survey to detect thousands of microlensing events with the cadence and precision necessary for statistical exoplanet population studies. To verify Roman’s ability to meet its core science requirement – recovering the lens mass and distance in at least 40% of planetary events with better than 20% uncertainty – targeted simulations are essential. Using the pyLIMASS inference framework and Fisher matrix-based uncertainty propagation, I demonstrate that for the well-characterized event OGLE-2013-BLG-0132Lb, the lens mass can be constrained to within 18.7% uncertainty, validating the feasibility of Roman’s requirement on a case-study basis. This thesis also addresses the legal and policy foundations needed to ensure global access to these simulation tools. By advancing open-source software models and proposing a space IP framework for equitable knowledge sharing, it supports collaborative scientific infrastructure for future international space missions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162931</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combined Steam Power Cycle and Turbofan Engine&#13;
for Improvement in Aviation Climate Impacts</title>
<link>https://hdl.handle.net/1721.1/162930</link>
<description>Combined Steam Power Cycle and Turbofan Engine&#13;
for Improvement in Aviation Climate Impacts
Mueller, Anna
Despite significant innovations in aviation technology over the last 70 years resulting in enormous efficiency improvement, the rising demand for air travel means that aviation carbon emissions continue to increase each year. The rate of improvement to aircraft propulsion engines is diminishing, and additional improvements often add significant engine cost or weight. With the goal of reducing aviation’s contribution to global climate change, future aircraft engine designers must consider concepts that stray from the traditional turbofan engine. In this thesis, I develop an engine cycle model combining the turbofan engine with a steam power cycle and use the model to explore the benefits of applying this concept to aircraft engines. In order to study the impact on engine performance and emissions from adding a steam cycle, the engine model needs to be capable of representing the water phase changes and the heat exchangers required to drive those phase changes. My contribution is the development of such a model – with special attention to the modeling of water properties and phase change of water – which ties heat exchanger models into an engine thermodynamic model. The engine cycle as well as heat exchanger parameters including water-to-air ratio, combustor exit temperature, overall pressure ratio, and water pressure are varied to explore the impact on overall engine performance, including the impact of the added heat exchanger weight. This thesis covers the development and initial testing of this model, which enables future studies in engines with phase-changing heat exchangers or water injection with the goal of assisting the search for the future engine technologies that will reduce harmful impacts of aviation while continuing to allow air travel.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162930</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Aero-Thermo-Chemo-Mechanical Coupling Framework for the Analysis of Hypersonic Ablative Thermal Protection Systems</title>
<link>https://hdl.handle.net/1721.1/162929</link>
<description>An Aero-Thermo-Chemo-Mechanical Coupling Framework for the Analysis of Hypersonic Ablative Thermal Protection Systems
Hoss, Summer A.
There are countless challenges associated with the accurate modeling of the hypersonic flight of ablative thermal protection systems (TPS): resolving the relevant coupled physical phenomena through multi-physics simulations, the management of the disparate spatiotemporal scales associated with the fluid and solid responses, and establishing a reliable numerical model able to predict the response of ablative materials exposed to extreme gradients—to name a few. The two-way, loosely coupled framework presented in this thesis consists of ΣMIT, a multi-physics computational solid mechanics (CSM) code, coupled with US3D, a hypersonic computational fluid dynamics (CFD) solver, to form a complete aero-thermo-chemo-mechanical simulation framework. The ΣMIT-US3D coupling framework provides a step towards high-fidelity simulation capabilities for hypersonic vehicles with ablative TPS, establishing a strong foundation for the simulation of fluid-structure interaction (FSI) phenomena and computation of the mechanical response of porous ablators. The requirement of a robust numerical formulation for the solution of hypersonic pyrolysis problems was made apparent when encountering numerical convergence issues with legacy methods, which sparked the development of a robust semi-implicit pyrolysis material model. The so-called Linearized Pyrolysis model employs simplifying assumptions for the energy and mass balance equations and relies upon the time-lagging of chosen terms to achieve linear convergence and robust performance. The performance of the model has been validated against the Ablation Workshop Test Cases and has increased the range of allowable representative hypersonic boundary conditions significantly compared to the legacy approach. Together, the model and the coupling framework are applied to two aero-thermo-chemo-mechanical analyses contained within the thesis: a spherical-tipped nose cone and the Orion heat shield.
Preliminary results identify the decomposition region as a zone in which high von Mises stress tends to occur—care must be taken to ensure that internal and external flight loads do not exceed allowable limits to prevent catastrophic TPS material failure in this region. However, perhaps the most significant insight resulting from the framework relates to the computation of mass fluxes through the porous ablative material, revealing that for an isotropic monolithic heat shield at a zero angle of attack, pyrolysis gas flow is driven by the pressure gradient applied to the shield such that the flow exits at the edges of the shield rather than from the base.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162929</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aeroverse: Aerospace Education in Extended Reality</title>
<link>https://hdl.handle.net/1721.1/162928</link>
<description>Aeroverse: Aerospace Education in Extended Reality
Johnson, Mollie
Aerospace education is a continuously evolving field that is increasingly dependent on digital tools. However, shifting the teaching paradigm to accommodate new cutting-edge technologies is an ambitious undertaking. Extended reality (XR), which encompasses augmented (AR) and virtual reality (VR), is an example of such technology. In recent years, VR has seen an increase in usage in education as a novel way to provide students with immersive learning experiences, and XR has a long history of use within the working aerospace industry. However, application in the overlap between the two – aerospace engineering education – remains largely unexplored to date. The themes addressed in this thesis are two-fold: first, the goal is to create VR learning modules to supplement the existing aerospace engineering curriculum. Second, the aim is to validate whether VR technology as a teaching medium can improve learning outcomes and student engagement within the MIT AeroAstro department. With these themes in mind, two experiments were conducted to explore this topic. The first experiment presents the design and execution of an experimental course aimed at aerospace engineering students to assess the educational impact of VR. Over the course of this study, ANOVA and Kruskal-Wallis tests found that there was no significant difference (p &gt; 0.05) in performance between the VR and non-VR groups, save for a few exceptional cases. The second experiment details the integration of a single VR module into an existing course in which all students interacted with the VR activity. Students responded positively to this experiment, reporting increased feelings of engagement and a sense that it aligned well with the rest of the course. One-sample Wilcoxon tests reveal that these findings are largely significant (p &lt; 0.05). This thesis advances the work on assessing VR use for aerospace education.
The implications of this work may influence the decisions of other educators regarding the adoption of VR technology as supplements to their own teaching methodologies. As a whole, this thesis contributes to the broader conversation on integrating VR into the classroom.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162928</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the Role of Mission Architecture in Crew Socioemotional Health for Mars Exploration</title>
<link>https://hdl.handle.net/1721.1/162927</link>
<description>Investigating the Role of Mission Architecture in Crew Socioemotional Health for Mars Exploration
MacRobbie, Madelyn
Human space exploration is evolving rapidly, with commercial successes and NASA’s Artemis missions driving rapid growth and innovation. Plans for longer, larger, and more complex missions necessitate development of new mission architectures to sustain the crews needed to support these missions. Larger missions and multi-site architectures have become feasible with advances in commercial launch vehicles, and generate increased safety and redundancy for crewed operations. However, crew dynamics in these mission architectures have yet to be investigated. This thesis investigates the role of mission architecture (specifically single-site versus dual-site configurations) in subgroup formation and the resulting impacts to socioemotional well-being. We first develop a systematic approach for optimizing analog mission design, then apply this to design two analog missions to compare the effects of single-site and dual-site mission architectures on crew dynamics and psychosocial health. Results provide valuable insights for future Mars mission design, where crew structure and psychosocial adaptation are critical to mission success.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162927</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Strong, Human-Compatible Codenames AI&#13;
Agent</title>
<link>https://hdl.handle.net/1721.1/162926</link>
<description>Towards a Strong, Human-Compatible Codenames AI&#13;
Agent
Zhu, Sebastian
Current language models are limited in their ability to solve complex planning and reasoning problems without the aid of search procedures. While a large body of work has developed search procedures tailored to single-turn, single-user natural language interactions, language generation in multi-agent contexts involving multiple users, imperfect information, and partially misaligned objectives remains extremely challenging. We aim to build search procedures that will enable language models to assist with interactive, multi-agent decision-making in a diverse range of contexts. Using the word game Codenames as a benchmark, we combine game-theoretic planning procedures with basic language model-based scoring methods to create agents that both play strong policies and play well with human policies. This work yields a set of practical text generation procedures, new evaluation benchmarks, and foundational algorithmic improvements in language model search.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162926</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Investigation into Contrail Observability from Different Satellite Platforms</title>
<link>https://hdl.handle.net/1721.1/162925</link>
<description>An Investigation into Contrail Observability from Different Satellite Platforms
Euchenhofer, Marlene V.
Contrails are line-shaped ice clouds that can form behind aircraft engines and, under certain cold and moist conditions, spread into contrail cirrus that persists for several hours. By adding to the existing cloud cover, contrails can act to either cool or warm, with the latter, on average, being dominant, resulting in an overall warming effect. Although the effective radiative forcing from contrails is inferred to be of the same order of magnitude as that caused by aviation’s CO₂ emissions, large uncertainties remain around specific radiative forcing estimates. &#13;
Observational studies of contrails, either to support climate impact assessments or operational contrail avoidance strategies, face trade-offs between spatial and temporal resolution. Many recent publications have relied on data from geostationary satellites, accepting lower input data resolution in exchange for higher temporal resolution and greater spatial coverage. Limitations of the observability of contrails in the resulting images have not been sufficiently investigated and need to be assessed and quantified.&#13;
This study aims to leverage the higher spatial resolution of VIIRS satellite imagery to identify potential limitations on contrail observability in lower-resolution GOES ABI imagery. We generate a dataset of human-identified contrails visible in false-color thermal infrared imagery from both GOES ABI and VIIRS for twelve scenes over the contiguous US. Based on this dataset, we investigate the number, cover, and appearance of the observed contrails. We find that GOES ABI does not resolve 80% of all contrails that can be identified in VIIRS imagery and only shows half of the total observed contrail length. Finally, incorporating an existing contrail-flight matching algorithm by Barbosa, we show that VIIRS tends to resolve more young contrails than GOES ABI. The findings from this study help to bound the validity of current contrail simulations and modeling outputs that estimate contrail cover and occurrence.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162925</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>MINCE: Dialect-Aware SQL Decomposition for Federated Query Execution</title>
<link>https://hdl.handle.net/1721.1/162924</link>
<description>MINCE: Dialect-Aware SQL Decomposition for Federated Query Execution
Zhang, Sophie S.
The increasing adoption of specialized database systems has led to the rise of heterogeneous data environments. While having multiple engines in a data infrastructure enables opportunities for workload optimization, SQL dialect incompatibility makes workload migration difficult. To address this challenge, we develop MINCE (Multi-dialect INtegration and Cross-engine Execution), a technique that decomposes SQL queries into parts to enable federated execution across engines with differing SQL dialects. MINCE uses a rule-based method to partition a query into executable components that are assigned to different database systems. To evaluate different execution strategies, MINCE further implements a cost model that incorporates both on-engine query execution time and inter-system data transfer overhead. We evaluate MINCE on a TPC-H-based workload augmented with PostgreSQL-specific functions unsupported in Amazon Redshift. Experimental results show that MINCE produces the fastest execution strategy among our baselines for 72.1% of queries using estimated cardinality, achieving a 2× speedup over single-engine baselines. With perfect cardinality information available to our cost model, this value increases to 88.4%, with an average 2.8× speedup. These results demonstrate that our system not only enables more flexible federated query execution, but also reliably identifies performant execution strategies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162924</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>New results in canonical polyadic decomposition over finite fields</title>
<link>https://hdl.handle.net/1721.1/162923</link>
<description>New results in canonical polyadic decomposition over finite fields
Yang, Jason
Canonical polyadic decomposition (CPD) consists of expressing a tensor (multidimensional array) as a sum of several rank-1 tensors, each of which is an outer/separable product of vectors. The number of rank-1 tensors used in a CPD is called the rank of the CPD, and the minimum possible rank of a CPD for a given tensor is called the rank of the tensor. CPD is at the core of fast matrix multiplication, a computational problem with widespread implications across several seemingly unrelated problems in computer science. Much recent progress in this field has used randomized heuristic search to find new CPDs, often over a finite field. However, if these techniques fail to find a CPD with low enough rank, they cannot prove that no such CPD exists. Consequently, these methods fail to resolve certain long-standing questions, such as whether the tensor corresponding to 3 × 3 matrix multiplication has rank less than 23. To make progress on these problems, we develop a novel algorithm that preserves exactness, i.e., it can provably verify whether or not a given tensor has a specified rank. Compared to brute force, when searching for a rank-R CPD of an n0 × · · · × nD−1-shaped tensor over a finite field F, where n0 ≥ · · · ≥ nD−1, our algorithm saves a multiplicative factor of roughly |F|^(R(n0−1) + n0·Σ_{d≥1} nd). Additionally, our algorithm runs in polynomial time. We also find a novel algorithm to search for border CPDs, a variant of CPDs that is also important in fast matrix multiplication. Finally, we study the maximum rank problem and give new upper and lower bounds, both for families of tensor shapes and specific shapes. Although our CPD search algorithms are still too slow to resolve the rank of 3 × 3 matrix multiplication, we are able to utilize them in this problem by adding extra search pruners that do not affect exactness or increase asymptotic running time.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162923</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parameter Estimation for Anonymous Hawkes Processes</title>
<link>https://hdl.handle.net/1721.1/162922</link>
<description>Parameter Estimation for Anonymous Hawkes Processes
Wang, William
Hawkes Processes are self-exciting point processes used to model many real-life networks in which an event from one agent causes the rate at which events occur from related agents to increase, such as in earthquake networks or social media. This project investigates the question of finding the underlying structure of a Hawkes Process given a history of when events occurred. This problem has been studied extensively in the regime where the event labels are known, and the bulk of the literature involves parameterizing the model and passing it through statistical learning tools. Our proposed work focuses on the same question in the "anonymous" case where labels are not given. In this regime, the lack of information makes many previous approaches intractable, and we develop novel non-parametric approaches for solving cases of the structure learning problem in algorithmic and information-theoretic settings. Our results show the ability to learn the entire model under mild assumptions in the information-theoretic regime, where we have access to an arbitrarily long Anonymous Hawkes Process transcript, whereas when we are confined to a polynomial-length transcript, the situation is considerably more difficult.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162922</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organization Infrastructure for Tokenized Asset Records</title>
<link>https://hdl.handle.net/1721.1/162921</link>
<description>Organization Infrastructure for Tokenized Asset Records
Whartenby, Patrick E.
The Tokenized Asset Record (TAR) represents a way to connect existing technology related to tokenized assets and asset schemas to real-world documents that validate the existence of an object. Exactly who should manage TARs and the properties of the related organization schemes remains an open question. Answering this question is crucial to furthering the existing digital economy. While existing solutions have sought to expand digital commerce through pioneering digital clearing houses, little work has explored support for other classes of real-world digitized assets with proof of ownership and existence. The research proposed here seeks to answer this question by suggesting possible solutions and developing a framework for uniformly analyzing the proposals. The research proposes and evaluates three models for the management of TARs. The first is a scheme that involves each industry setting up its own TAR database and managing the system independently from other industries. The second proposes hosting all TARs on a single blockchain. The third argues for an off-chain decentralized platform to host all TARs, akin to the Data Spaces proposed by the European Union. The research finds, based on the proposed criteria, that a decentralized off-chain approach best meets the goals of a TAR management framework.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162921</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unveiling Phenotype–Genotype Interplay with Deep Learning Foundation Models for scRNA-seq: A Quantitative Perspective</title>
<link>https://hdl.handle.net/1721.1/162920</link>
<description>Unveiling Phenotype–Genotype Interplay with Deep Learning Foundation Models for scRNA-seq: A Quantitative Perspective
Thadawasin, Pakaphol
Foundation models have emerged as powerful tools for analyzing single-cell RNA sequencing (scRNA-seq) data, leveraging large-scale pretraining to capture complex gene expression patterns. However, a comprehensive quantitative framework for understanding the interplay between phenotypes and genotypes remains underdeveloped. Such a framework is critical not only for validating model performance but also for uncovering previously unrecognized biological relationships. In this work, we present both traditional and deep learning-based quantitative analysis pipelines for PolyGene [1], a transformer-based scRNA-seq foundation model, aimed at disentangling the complex phenotype–genotype relationship. First, we implement a top-k classification and entropy evaluation pipeline to serve as a primary validation framework. Our results demonstrate that the pretrained PolyGene [1] is robust in top-k classification metrics and provides meaningful insights into the entropy landscape of human cells across different life stages. Second, we propose a novel deep learning gradient-based gene selection method designed to address limitations in traditional feature selection approaches, such as poor scalability and sensitivity to heterogeneity in high-dimensional data. Through empirical evaluations on benchmark scRNA-seq datasets, we show that our method enhances model interpretability and improves downstream performance, offering a more scalable and biologically relevant alternative to existing techniques. Overall, this work introduces a set of quantitative analysis tools that fill a critical gap in evaluating and interpreting scRNA-seq foundation models, contributing to a deeper understanding of the genotype–phenotype interplay through modern deep learning techniques.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162920</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deepfake Face Detection: An Ensemble Framework for Generalized Classification in Biometric Verification Systems</title>
<link>https://hdl.handle.net/1721.1/162919</link>
<description>Deepfake Face Detection: An Ensemble Framework for Generalized Classification in Biometric Verification Systems
Zen, Hilary
Generation methods for deepfake images have advanced rapidly, and deepfake face images pose a critical security threat to biometric verification systems. Applications that rely on face recognition to grant access to sensitive data need to maintain high accuracy across a wide variety of deepfake generation methods, including novel and developing types that the application has not previously trained on. Current deepfake detection models achieve near-perfect accuracy on benchmark datasets, but do not perform as well on unseen types of deepfakes that were not part of their training dataset. We propose building an ensemble model with multiple base detectors, each trained on different generation model families to maintain high performance across many deepfake generation methods. Using four base models, including two models with the same architecture and training data, we exhaustively test all possible ensemble models. We find that combining similar base models trained on the same deepfake generation family does not improve performance compared to the individual base models. However, combining base models trained on different deepfake generation families leads to significant increases in accuracy and recall. Our ensemble framework provides a flexible and inexpensive solution in the ever-changing landscape of deepfake generation and security.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162919</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>When Should Model Updates Propagate?</title>
<link>https://hdl.handle.net/1721.1/162918</link>
<description>When Should Model Updates Propagate?
Struckman, Isabella Marguerite
AI supply chains rely increasingly on downstream developers adapting pretrained upstream models. When upstream models are retrained with data deletions (which may be prompted by copyright violations, privacy compliance, or removal of illicit content), it is unclear whether all downstream developers must also undergo costly retraining. In this thesis, we investigate the propagation of data deletions through fine-tuned models within a controlled visual classification setting comprising dog-breed and plane-manufacturer recognition tasks. We show that not all model updates propagate equivalently to downstream tasks, and that the deleted data’s relationship to the downstream task strongly shapes its effect on the downstream model. We demonstrate that neither simple performance metrics (accuracy or F1), nor output-level divergences, nor even embedding-based similarity metrics alone adequately predict when a deletion meaningfully impacts downstream tasks. To overcome these limitations, we introduce an information-theoretic metric grounded in Gaussian mixture modeling (GMM) of embedding distributions, capturing deeper representational shifts. Our proposed approach precisely distinguishes when deletions require downstream retraining, achieving high predictive accuracy and recall without directly accessing retrained downstream models.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162918</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Digital Twin Modeling for NV Magnetometry</title>
<link>https://hdl.handle.net/1721.1/162917</link>
<description>Digital Twin Modeling for NV Magnetometry
Rich, John P.
This thesis presents the development and application of a digital twin modeling framework for nitrogen-vacancy (NV) center-based magnetometry, advancing the field of quantum sensing. A surrogate model serves as a computational representation of the physical NV magnetometer system, enabling comprehensive exploration of parameter spaces to optimize device design. Leveraging machine learning techniques, this study optimizes control mechanisms, including the design of learned analog filters, to enhance system performance. This research investigates the fundamental limits of NV magnetometer performance, identifying strategies to minimize power requirements while maintaining high sensitivity. A dynamic framework is implemented to update the surrogate model’s parameters in real-time based on experimental measurements, ensuring accurate fidelity to the physical system. Additionally, the optimized control strategies are simulated within the digital twin environment, demonstrating their potential for advanced quantum sensing applications such as magnetocardiography (MCG) for heartbeat detection.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162917</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fuzzing for User-Schedulable Languages</title>
<link>https://hdl.handle.net/1721.1/162916</link>
<description>Fuzzing for User-Schedulable Languages
Moon, Kenneth
Performance engineers restructure programs to use hardware as efficiently as possible. Even simple mathematical functions can become sprawling and complex programs when fully optimized, as the resulting code must often be precisely molded around specialized behaviors supported by the hardware. To help performance engineers deal with this complexity, user-schedulable languages provide scheduling operations, which are abstractions of common steps taken to restructure programs. By composing these scheduling operations, performance engineers can concisely represent their intended optimizations to programs. Exo, being a user-schedulable language, provides this abstraction with the additional guarantee that any scheduling operation which passes Exo’s automated checks does not change the behavior of the program. Though this guarantee is useful for avoiding bugs while optimizing a program, the analysis required to provide such a guarantee is infeasible on programs in general. To make analysis feasible, Exo only allows users to write programs with a restricted set of behaviors. As a result, some programs are impossible to schedule using Exo, limiting the use cases of Exo. In this thesis, we explore how fuzzing can be used as an alternative to the existing analysis in Exo, with the goal of allowing Exo to analyze more complex programs. “Fuzzing” refers to a test case-driven approach to determining properties of a program, such as whether its behavior changes after a scheduling operation. If the program’s outputs do not change after the scheduling operation when provided the same inputs, the fuzzer concludes that the program’s behavior did not change. Since fuzzing only requires us to know how to evaluate the program, it can be applied to a much broader set of programs than the existing analysis in Exo.
However, fuzzing can miss mistakes in scheduling if the fuzzer fails to find a test case demonstrating the issue with a scheduling operation, as it is a complete form of analysis rather than a sound form of analysis like the existing analysis in Exo. Additionally, fuzzing can be costly compared to the original analysis, as repeatedly running programs on many test cases for many scheduling operations can be slow. We explore ways to mitigate these issues throughout this work. Finally, we evaluate our implementation of the fuzzer and its performance on some example use cases for Exo.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162916</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Verifiable Computation Made Easy</title>
<link>https://hdl.handle.net/1721.1/162915</link>
<description>Efficient Verifiable Computation Made Easy
Ma, Chengyuan
Recent advancements in cloud computing, data privacy, and cryptography have sparked a growing interest in Verifiable Computation (VC) in both industry and academia. In particular, zero-knowledge proof (ZKP) algorithms are gaining rapid traction due to their strong privacy guarantees. However, they are notoriously computationally intensive, making performance a critical concern. Given the inherent data parallelism and heavy use of vector operations in ZKP computations, multicore CPUs and GPUs offer a promising acceleration path. Unfortunately, accelerated programming for ZKP remains challenging: ZKP algorithms evolve rapidly, their structures grow increasingly complex, and writing high-performance ZKP code is tedious, error-prone, non-portable, and unfriendly to algorithm developers. We present an end-to-end compiler framework, Zera, that lowers ZKP algorithms to parallel hardware for efficient acceleration, with minimal programmer effort. By effectively leveraging ZKP algorithm patterns and trends, we are able to automate the key performance optimizations, with a succinct linguistic extension and a set of practical compiler customizations. Consequently, with just 92 lines of trivial high-level annotation added to the original 7,000 lines of C++ code, our single-source code solution delivers 33.9× and 24.0× speedup on GPU over a highly optimized serial C++ implementation on CPU and an existing multithreaded Rust baseline on CPU, respectively. Compared to our hand-optimized GPU/CUDA implementation requiring an extra 2,000 lines of low-level code (roughly 60 programmer hours), our compiler-generated GPU implementation is only 58% slower (1.58× slowdown) on large inputs, demonstrating a compelling trade-off between performance and productivity.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162915</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Partitioning for Efficient Parallel Reads</title>
<link>https://hdl.handle.net/1721.1/162914</link>
<description>Optimizing Partitioning for Efficient Parallel Reads
Sragow, John
Modern database management systems spend a significant portion of query execution time scanning data, so minimizing scanning latency is critical to maintaining high performance. As such, databases are partitioned into blocks so that queries can skip irrelevant tuples and avoid scanning the entire database. When this partitioning is optimized to minimize the number of blocks accessed by each query, smaller queries that access very few blocks fail to fully utilize the bandwidth because they cannot take advantage of parallel reading. However, reducing the size of each block in order to increase the number of blocks accessed by smaller queries slows down larger queries by forcing them to increase the number of I/Os they must perform. We propose a novel partitioning scheme that shuffles the row groups of blocks accessed by smaller queries so that they can read fewer tuples from multiple blocks in parallel without increasing the I/O cost of larger queries. Our experiments show that this technique allows smaller queries to be scanned up to twice as fast on larger block sizes as they would on a standard partitioning without significantly slowing down larger queries.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162914</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating Functional Knowledge into Protein Design: A Novel Approach to Tokenization and Noise Injection for Function-Aware Protein Language Models</title>
<link>https://hdl.handle.net/1721.1/162913</link>
<description>Integrating Functional Knowledge into Protein Design: A Novel Approach to Tokenization and Noise Injection for Function-Aware Protein Language Models
Tang, Adrina
Designing novel proteins with specific biological functions remains a fundamental challenge in computational biology. While recent advances in protein language models have enabled powerful sequence-based representations, most models, including state-of-the-art systems like ESM3, fall short in effectively encoding functional context during protein generation. In this work, we present a multimodal protein co-design framework that conditions sequence generation on fine-grained functional annotations, specifically leveraging residue-level Gene Ontology (GO) term labels on sequences from the UniRef100 database. By explicitly associating functional signals with residue elements of proteins, our model learns to generate function-conditioned protein sequences that are biologically plausible and semantically consistent. Unlike prior approaches, which treat function as a secondary feature or a classification task, our method focuses on joint reasoning over function and sequence during the design process. This closes a critical gap in the current landscape of protein design tools, offering a scalable and generalizable approach to co-designing protein sequences with user-specified functional profiles.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162913</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pairwise Matching of Intermediate Representations for Fine-grained Explainability</title>
<link>https://hdl.handle.net/1721.1/162912</link>
<description>Pairwise Matching of Intermediate Representations for Fine-grained Explainability
Shrack, Lauren
The differences between images belonging to fine-grained categories are often subtle and highly localized, and existing explainability techniques for deep learning models are often too diffuse to provide useful and interpretable explanations. We propose a new explainability method (PAIR-X) that leverages both intermediate model activations and backpropagated relevance scores to generate fine-grained, highly-localized pairwise visual explanations. We use animal and building re-identification (re-ID) as a primary case study of our method, and we demonstrate qualitatively improved results over a diverse set of explainability baselines on 35 public re-ID datasets. In interviews, animal re-ID experts were in unanimous agreement that PAIR-X was an improvement over existing baselines for deep model explainability, and suggested that its visualizations would be directly applicable to their work. We also propose a novel quantitative evaluation metric for our method, and demonstrate that PAIR-X visualizations appear more plausible for correct image matches than incorrect ones even when the model similarity score for the pairs is the same. By improving interpretability, PAIR-X enables humans to better distinguish correct and incorrect matches.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162912</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling End-to-End Sensitivity Analysis of Integrated Models</title>
<link>https://hdl.handle.net/1721.1/162911</link>
<description>Enabling End-to-End Sensitivity Analysis of Integrated Models
Davidson, Rosemary K.
As space-based precision-pointed telescopes continue to grow in scale and complexity, integrated models are increasingly relied upon to inform early design decisions and support system-level verification. When ground testing of full-system configurations is infeasible, integrated models, including structural-thermal-optical performance models, are essential for predicting performance and validating requirements across multidisciplinary, coupled domains. In early design phases, when uncertainty is high and design decisions have long-term implications for cost and schedule, it is especially important to understand which uncertain parameters most influence system performance. Global sensitivity analysis can help identify dominant uncertainty sources and inform decisions about model reduction, testing priorities, and resource allocation. However, the computational cost of applying global sensitivity analysis to integrated models often exceeds available resources. The presence of cross-disciplinary coupling between subsystem models further complicates analysis efforts. Coupled and dependent variables obscure how specific inputs influence system-level performance, limiting the ability to reduce model dimensionality or focus testing efforts on individual subsystems. There is a need for integrated modeling methodologies that enable tractable global sensitivity analysis of large, feedforward-coupled systems while preserving the accuracy needed to support early-phase design.

This thesis develops both exact and approximate methods for performing global sensitivity analysis on integrated models. A set of exact propagation techniques is introduced to compute end-to-end sensitivity indices when specific structural conditions are met, including functional linearity, non-interacting transforms, and monotonic intermediate mappings. These methods are evaluated using a suite of benchmark test cases that isolate when the exact sensitivity analysis method is valid and when structural assumptions begin to break down. A modular modeling framework is developed to compute exact or approximate end-to-end sensitivity indices and to enable automated mapping between disciplinary models in the integrated chain. The approach is also applied to a representative linearized structural-thermal-optical performance model, demonstrating how end-to-end global sensitivity analysis can be performed efficiently across thermal, structural, and optical subsystems.

To extend tractable sensitivity analysis to black-box models, several approximate strategies are introduced, including multifidelity surrogate modeling and statistical regression. These methods support both forward uncertainty propagation and variance-based global sensitivity analysis for structurally complex integrated models, without requiring full-system evaluation at every iteration. Together, the exact and approximate strategies developed in this work provide a foundation for scalable end-to-end global sensitivity analysis in early-phase design, where identifying influential parameters and constraining model complexity are essential for evaluating candidate architectures and informing mission decisions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162911</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Learning for Space Object Density Distribution Prediction</title>
<link>https://hdl.handle.net/1721.1/162910</link>
<description>Deep Learning for Space Object Density Distribution Prediction
Sarangerel, Sumiyajav
The rapid growth of artificial objects in Low Earth Orbit (LEO) has heightened concerns over orbital congestion and collision cascades, known as Kessler Syndrome. Traditional high-fidelity models, while accurate, are computationally intensive and poorly scalable. This thesis introduces a machine learning–based framework for forecasting the long-term evolution of space object density. A large dataset is generated using the MIT Orbital Capacity Assessment Tool – Monte Carlo (MOCAT-MC), simulating thousands of scenarios across varying launch, disposal, and maneuver parameters. A Convolutional Gated Recurrent Unit (ConvGRU) is trained to predict density distributions over a 100-year horizon, achieving accurate forecasts with significantly reduced runtime. With a simple guidance mechanism, the generalization capability of the model across diverse scenarios is greatly improved. This approach offers a scalable and efficient tool for supporting future space traffic management and sustainability efforts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162910</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Understanding Privacy Leakage in Decentralized and Collaborative Learning</title>
<link>https://hdl.handle.net/1721.1/162909</link>
<description>Towards Understanding Privacy Leakage in Decentralized and Collaborative Learning
Shi, Yichuan
The emergence of large-scale machine learning (ML) models has highlighted a fundamental conflict: While computational demands push for the consolidation of data and models in vast, centralized data centers, real-world data continues to be distributed and fragmented across personal devices and private databases. How can we reconcile this contradiction without further monopolizing the ML ecosystem? What unique privacy and security risks arise from alternative ML orchestration system designs? Furthermore, how do these vulnerabilities and system failures inform our understanding of both how and what machines learn? This thesis attempts to explore these questions. It first examines key types of privacy leakages, evaluating their impact under realistic, cross-distribution settings. It then introduces a benchmarking analysis platform, SONAR, to investigate the relationship between privacy leakage (measured by attack performance), network topology, and data distribution. Finally, it presents Co-Dream, a novel algorithm for collaborative learning that offers improved privacy characteristics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162909</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prototyping a Scalable Proof Engine</title>
<link>https://hdl.handle.net/1721.1/162908</link>
<description>Prototyping a Scalable Proof Engine
Rosario, Jon
Formal verification is an exciting development in software engineering, enabling implementations of programs to be rigorously checked against mathematical specifications. Assuming the specification is well-defined, formal verification provides guarantees of a program’s correctness and freedom from bugs that are simply not possible with test-based methods. There’s just one catch: the process of verifying large programs in popular theorem provers such as Coq (now known as Rocq) or Lean is painfully slow. These proof assistants rely on proof engines to construct proofs of correctness for given properties, but to our knowledge, there is no widely available proof engine that offers strong performance guarantees. Even more frustrating is the lack of consensus on what “good” performance should even mean in this context. This thesis lays the groundwork for addressing that gap by presenting a proof engine design that achieves asymptotically linear-time performance with respect to several important variables. We illustrate the design and its performance characteristics with examples from an implementation of the design and outline directions for future work.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162908</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Embedded Computing for Wavefront Control on Future Space Telescopes</title>
<link>https://hdl.handle.net/1721.1/162906</link>
<description>Embedded Computing for Wavefront Control on Future Space Telescopes
Belsten, Nicholas
Future space telescopes will use adaptive optics to suppress starlight to directly image and characterize exoplanets. A measurement using this technique may be the first to detect extraterrestrial life in the universe. However, the real-time execution of adaptive optics control algorithms places unprecedented demands on spaceborne processors. Previous work has determined that processing limitations can degrade the achievable contrast and scientific yield of future exoplanet imaging missions. In this work, we quantify the relationship between adaptive optics processing needs and high contrast performance for the Habitable Worlds Observatory (HWO), a mission expected to launch in the 2040s and achieve the 10^-10 contrast necessary to image Earth-like planets around Sun-like stars.

We survey the current landscape of high-order wavefront sensing and control (HOWFSC) algorithms for a future mission like HWO. We parameterize the compute requirements of multiple algorithms through analyses of computational patterns, benchmarks, and problem scaling. In parallel, we assess the capabilities of current and emerging spaceborne processors. We integrate these findings to model processor requirements across several dimensions of telescope design, and we predict whether various processor choices can meet the computational demands of specific HWO configurations. To validate our models, we implement HOWFSC algorithms on representative embedded processors and compare measured performance to predictions. These implementations also reduce risk for spaceflight by increasing the technology readiness level (TRL) of the algorithm–processor pairing to TRL 4.

Given the significant uncertainty in HWO’s eventual design, we extend our deterministic models using Monte Carlo methods to evaluate system performance under uncertainty. We identify key sources of uncertainty and estimate the achievable contrast across a range of system configurations. Our results show that offloading computation to the ground is an important architectural option for most HWO designs. Even under optimistic assumptions, current space processors are insufficient to support the full range of HWO configurations. However, newly developed efficient algorithms substantially reduce the computational burden. Overall, we estimate that current technology has only a 40% probability of supporting HWO’s mission goals without additional architectural innovations. We conclude by recommending combinations of onboard computing, ground offloading, and optical design constraints to help close this technology gap as the mission design matures. In particular, we find that telescope stability and ground-in-the-loop performance are primary drivers of contrast performance, while algorithmic advances such as AD-EFC and onboard compute approaching ground-based GPU performance also provide significant benefits.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162906</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stress-Guided Material Segmentation for Recycled 3D Printed Structures Using Finite Element Analysis</title>
<link>https://hdl.handle.net/1721.1/162905</link>
<description>Stress-Guided Material Segmentation for Recycled 3D Printed Structures Using Finite Element Analysis
Paulin, Cole J.
We present a simulation-driven method for optimizing the structural performance of 3D printed objects made with recycled and fresh filament. Although sustainable materials such as recycled PLA reduce environmental impact, they often exhibit degraded or inconsistent mechanical properties, making them less suitable for structurally demanding applications. To address this, we develop a finite element analysis (FEA) pipeline that simulates stress and strain distributions under user-defined loading conditions, enabling intelligent segmentation of the object into regions of high and low mechanical demand. These segmented regions can be assigned recycled or fresh material during fabrication. Our system leverages open-source tools (SfePy) for simulation, and we validate its accuracy against Abaqus, a commercial industry standard. We also introduce methods for automatically identifying and correcting segmentation artifacts, such as small disconnected islands. Through comparative simulation studies and performance evaluation, we demonstrate that our approach enables more sustainable 3D printing without sacrificing structural reliability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162905</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metaheuristic Optimization for Automatic Arrangement of Power Electronics Components in a Shipboard Electrical Distribution System</title>
<link>https://hdl.handle.net/1721.1/162904</link>
<description>Metaheuristic Optimization for Automatic Arrangement of Power Electronics Components in a Shipboard Electrical Distribution System
Lohier, Sebastien
This thesis proposes a novel methodology for the automatic placement of Power Electronics Building Blocks (PEBBs) in modular, integrated power corridor designs. These building blocks, which are created and tested offsite for a variety of applications, are currently placed manually during the design process, a method that is time-consuming and suboptimal. To address this challenge, we reduce the placement problem to a 2D bin-packing problem, leveraging a hybrid approach combining Genetic Algorithms and Simulated Annealing. This approach enables the generation of optimized placements that find the extremes of arbitrary heuristics, including minimizing routing distance and power density, effectively improving both design efficiency and system performance. The proposed methodology offers a significant step toward automating and optimizing the layout of power electronic components in complex systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162904</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Interpretable Multimodal Framework for Regional Organ Transplantation Outcomes</title>
<link>https://hdl.handle.net/1721.1/162752</link>
<description>An Interpretable Multimodal Framework for Regional Organ Transplantation Outcomes
Lee, Ju Young
The demand for kidney transplants continues to outpace supply, with over 89,792 patients on the waitlist as of September 2024, yet only 27,332 transplants performed in 2023 [1], and 28% of recovered kidneys going non-utilized [2]. In this thesis, we highlight the use of large language model (LLM) embeddings combined with structured tabular data to build a predictive classifier that estimates offer outcomes for kidney donor-recipient matches. For each predictive model deployed, we provide further analysis on the interpretability of these black-box models using a custom-designed SHAP analysis framework. Our study focuses on three distinct U.S. regions (Regions 1, 2, and 3) with markedly different demographics and amounts of data on organ acceptances (Region 1: 43,126 offers with 2.19% acceptance rate; Region 2: 394,640 offers with 1.57% acceptance rate; Region 3: 169,342 offers with 2.23% acceptance rate, over the years 2016-2019). Among the baseline XGBoost models, Region 3 achieved the highest performance, with a precision-accept score of 0.929 and accuracy of 0.993 on the test data. Building on this strong foundation, the multimodal TabText model in Region 3 achieved the best performance overall, with a precision-accept score of 0.959 and accuracy of 0.993 after fine-tuning for six epochs. Our findings suggest that increasing the number of text features, extending training epochs, and incorporating explicit numerical values led to improved model performance in Region 3. In Regions 1 and 2, the baseline model outperformed the TabText model, suggesting that data sparsity in these regions may have limited the effectiveness of the multimodal approach and that further hyperparameter tuning is needed. We also present several visualization techniques to enhance model interpretability. Specifically, we developed a novel SHAP explainer that illustrates feature interactions between multimodal inputs, including both tabular and textual data.
Additionally, we explored methods to identify regions of high and low model fidelity by mapping per-sample prediction errors onto t-SNE embeddings. Overall, this thesis introduces new directions for transplant research in the context of transformer-based models and interpretable AI. Leveraging data-driven decision-support tools and refining allocation policies are essential steps toward addressing the persistent gap between supply and demand in the kidney transplant landscape.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162752</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Medium Access Control Protocol for Satellite Constellations</title>
<link>https://hdl.handle.net/1721.1/162751</link>
<description>Medium Access Control Protocol for Satellite Constellations
Li, Brian
Satellite internet constellations have emerged as a promising solution for providing global internet connectivity, especially in regions underserved by terrestrial infrastructure. However, as user demand increases, especially in densely populated urban areas, existing Medium Access Control (MAC) protocols face significant scalability challenges and fail to take advantage of advanced antenna processing techniques, including phased array nulling, as well as capacity sharing via inter-satellite links.
We present both an offline linear program and a novel online greedy MAC protocol to assign satellite resources to users using either sequential service, capacity sharing, or interference-aware nulling. Our offline formulation provides an upper bound on system performance, and while our online protocol is sub-optimal compared to this optimum, it is designed to be implementable on a real-time system. Simulations demonstrate that incorporating nulling can increase effective capacity by up to 25 times, substantially boosting profit in high-demand scenarios. We further quantify the performance gap between the online protocol and the offline optimum under varying demand distributions, showing that our online approach achieves near-optimal results in low-peakiness settings and gracefully degrades under more extreme conditions. These results highlight the importance of spatial processing at the MAC layer and offer practical design insights for future satellite internet constellations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162751</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Copilot Tutor: Automated Software Engineering Practice Augmented with LLMs</title>
<link>https://hdl.handle.net/1721.1/162750</link>
<description>Copilot Tutor: Automated Software Engineering Practice Augmented with LLMs
Kong, Blisse
In recent years, large language models (LLMs) have become more ubiquitous in the workplace. In software engineering, they are often realized as “copilots” which produce code given a prompt or existing code. Programmers using these tools to increase their coding productivity need to be proficient in inspecting and understanding these copilots’ outputs. As engineers incorporate these tools to accelerate their workflows, they have a parallel opportunity to accelerate learning new programming languages. This thesis presents a tutor interface where students with some programming experience in an origin language can learn a target language while practicing how to critically read and fix a copilot’s output to write correct, safe programs. This work also introduces the automatic generation of exercises that teach the syntax and semantics on which a programmer experienced in the origin language, but not the target language, should focus.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162750</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic Physical Withholding of Renewable Energy Generators</title>
<link>https://hdl.handle.net/1721.1/162749</link>
<description>Strategic Physical Withholding of Renewable Energy Generators
Irvine, Paul M.
Renewable generators may have incentives to strategically withhold energy output in electricity markets, either to exercise market power or to avoid congestion pricing caused by transmission constraints. Although academic work often treats renewables as not downward dispatchable, renewable generators can, at least in principle, reduce their output by self-curtailing. This paper shows that a firm with a large, diverse portfolio could find it profit-maximizing to withhold renewables over conventional thermal generators once it accounts for constraints on ramp rates and minimum generation, as well as the costs of starting up generators and the generator-type-dependent probability of detection by market monitoring authorities. Long-term forward contracts like pay-as-produced Power Purchase Agreements (PPAs) can blunt incentives to exercise market power by insulating individual generators from wholesale prices; however, since generators under PPAs typically bid into the wholesale market and influence competitive prices, they may actually encourage renewable withholding if contract prices are sufficiently low and the parent firm’s portfolio is exposed to wholesale prices. To screen for renewable withholding, this paper proposes three methods: (1) examining the distribution of aggregate output across export interfaces for suspicious bunching, (2) testing deviations from ex-ante forecasts, and (3) identifying the time intervals where generators exhibit structural model changes compared to a benchmark presumed free of withholding. Together, this work prepares academics and regulators to more accurately model the behavior of renewable generators in electricity markets and to screen for potential market abuses.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162749</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Argos: Verifiable FHE Using Commodity Hardware</title>
<link>https://hdl.handle.net/1721.1/162748</link>
<description>Argos: Verifiable FHE Using Commodity Hardware
Jepsen, Fisher
We present Argos, a simple approach for adding verifiability to fully homomorphic encryption (FHE) schemes using trusted hardware. Traditional approaches to verifiable FHE require expensive cryptographic proofs, which incur an overhead of up to seven orders of magnitude on top of FHE, making them impractical. With Argos, we show that trusted hardware can be securely used to provide verifiability for FHE computations, with minimal overhead relative to the baseline FHE computation. An important contribution of Argos is showing that the major security pitfall associated with trusted hardware, microarchitectural side channels, can be completely mitigated by excluding any secrets from the CPU and the memory hierarchy. This is made possible by focusing on building a platform that only enforces program and data integrity and not confidentiality (which is sufficient for verifiable FHE, since all data remain encrypted at all times). All secrets related to the attestation mechanism are kept in a separate coprocessor (e.g., a TPM)—inaccessible to any software-based attacker. Relying on a discrete TPM typically incurs significant performance overhead, which is why (insecure) software-based TPMs are used in practice. As a second contribution, we show that for FHE applications, the attestation protocol can be adapted to only incur a fixed cost. Argos requires no dedicated hardware extensions and is supported on commodity processors from 2008 onward. Our prototype implementation introduces 3% overhead for FHE evaluation, and 8% for more complex protocols. In particular, we show that Argos can be used for real-world applications of FHE, such as private information retrieval (PIR) and private set intersection (PSI), where providing verifiability is imperative. By demonstrating how to combine cryptography with trusted hardware, Argos paves the way for widespread deployment of FHE-based protocols beyond the semi-honest setting, without the overhead of cryptographic proofs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162748</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Conversion of C and C++ Programs to the BuildIt Multi-Stage Programming Framework</title>
<link>https://hdl.handle.net/1721.1/162747</link>
<description>Automatic Conversion of C and C++ Programs to the BuildIt Multi-Stage Programming Framework
Kumar, Aryan
BuildIt allows users to write C++ programs that execute in multiple stages, where the output of one stage is the program source for the next, ending with some final output. This is particularly useful for writing specialized code and generating code for domain-specific languages. While there are other approaches to multi-stage programming, BuildIt has several advantages: it takes a library-based approach (so it requires no modifications to the compiler and is thus highly portable), and it is easy to use, as all the user has to do is change the declared types of variables in their C++ program. The goal of this thesis is to further improve BuildIt’s ease of use by simplifying this step: in particular, by developing a tool that automatically converts existing C and C++ programs to the BuildIt framework. We show how to use Clang tooling in conjunction with modifications to the Clang compiler to perform non-trivial source modifications, namely type modification, to automatically convert code to its unstaged BuildIt equivalent. As the unstaged BuildIt code can be specialized by staging certain variables, this tool will ultimately make it easier to stage and optimize C/C++ repositories with the BuildIt framework.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162747</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Association of GLP-1 Receptor Agonist Use with Kidney and Cardiovascular Outcomes in Stable Kidney Transplant Recipients</title>
<link>https://hdl.handle.net/1721.1/162746</link>
<description>Association of GLP-1 Receptor Agonist Use with Kidney and Cardiovascular Outcomes in Stable Kidney Transplant Recipients
Jung, Emma Yejoo
Recent surges in the use of glucagon-like peptide-1 receptor agonists (GLP-1RA) have shown promise in reducing cardiovascular events and improving kidney function in patients with type 2 diabetes. Encouraged by these improvements, kidney transplant recipients (KTRs) have started using GLP-1RA. However, their effects in KTRs remain largely unexamined in clinical studies. This thesis uses a large-scale Electronic Health Record (EHR) database to perform a retrospective cohort analysis of the association between GLP-1RA use and kidney and cardiovascular outcomes among stable KTRs. Primary outcomes include all-cause mortality, major adverse kidney events (MAKE), and major adverse cardiac events (MACE). Among stable KTRs, GLP-1RA users show reduced risk for all-cause mortality (adjusted hazard ratio [aHR]: 0.45; 95% confidence interval [CI]: 0.32-0.62) and MAKE (aHR: 0.69; 95% CI: 0.58-0.81), but no significant difference for MACE (aHR: 0.84; 95% CI: 0.67-1.05). In addition, users show increased risk for irritable bowel syndrome (IBS) (aHR: 2.11; 95% CI: 1.07-4.15) and urinary tract infection (UTI) (aHR: 1.53; 95% CI: 1.27-1.85). These results indicate the potential of GLP-1RA to reduce mortality and adverse kidney outcomes, while increasing the risk of IBS and UTI, in KTRs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162746</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hardware Acceleration for Real-Time Compression of 3D Gaussians</title>
<link>https://hdl.handle.net/1721.1/162745</link>
<description>Hardware Acceleration for Real-Time Compression of 3D Gaussians
Kahler, Kailas B.
3D Gaussian Splatting (3DGS) is a technique for novel view synthesis, in which images of a scene from a new viewpoint are generated from images taken at other viewpoints. It has gained popularity for its reduced computational overhead, which yields faster training and rendering than other methods like Neural Radiance Fields (NeRFs). Its applications outside of strictly novel view synthesis have also been explored, with monocular simultaneous localization and mapping (SLAM) in robotics being an emergent application. However, because of limited on-board battery capacity, the computer hardware used in small robots is much less capable than the high-powered GPUs the 3DGS algorithm was originally developed on, with less compute as well as less memory capacity and bandwidth. While there has been work developing specialized compute for the 3DGS rendering pipeline, memory remains an obstacle to deployment. The Gaussian map can occupy from 1 MB to 700 MB of memory, which is too large to store on-chip within micro-robots, and moving Gaussians from memory to compute can dominate power consumption. While there has been prior work on algorithms for compressing Gaussian representations, they are not yet capable of running in real-time on the hardware present in these robots, as would be required for SLAM. Thus, this thesis explores the limits of these compression methods on current hardware, resulting in an optimized CUDA implementation with more than 100× the throughput of prior work, achieving real-time operation on workstation-class hardware. After concluding that custom hardware is necessary for further improvement, this thesis also presents a hardware accelerator that nears real-time compression performance within a reduced power budget, outperforming an NVIDIA Jetson Orin Nano with 64% higher throughput while using 1/16th of the multipliers and drawing 38% of the power when running at 100MHz on an AMD UltraScale+ FPGA.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162745</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Personalization of AI Tutor Based on Knowledge Graphs</title>
<link>https://hdl.handle.net/1721.1/162744</link>
<description>Personalization of AI Tutor Based on Knowledge Graphs
Huang, Sheng
Personalized tutoring, tailored to the specific knowledge and needs of individual students, has been shown to significantly enhance academic performance. Research by Schmidt and Moust, for example, highlights that tutors who engage with students on a personal level are more effective in guiding them toward higher academic achievement [1]. Inspired by this principle, the Axiom group at the MIT Media Lab developed an AI tutor for their Intro to Programming courses. The initial version of the tutor, RAGS, relied on analyzing past conversations between students and the tutor, as well as course content, to generate personalized responses. While this approach showed promise, it faced scalability challenges, such as the need to store an ever-growing volume of conversation history and the risk of exceeding token limits in prompt context windows. Additionally, the model occasionally struggled with over-generalization, particularly when responding to vague questions based solely on historical interactions. To address these limitations, this thesis introduces a new approach: a student knowledge graph. Rather than relying on an expanding archive of past conversations, the knowledge graph uses weighted nodes to represent a student’s understanding of each concept. A weight of -8 indicates subpar understanding, while a weight of 8 signifies mastery. After pre-processing the course data, the graph maintains a fixed size, eliminating the need for additional storage over time. This innovation solves two critical problems:
1. Scalability: By leveraging a fixed-size PostgreSQL database, the student knowledge graph avoids the storage challenges associated with saving endless conversation histories.
2. Improved Personalization: Instead of sifting through old conversations, the tutor uses concept weights to generate more precise and contextually relevant responses, even to vague questions.
Testing and evaluation of the implemented system demonstrate its effectiveness in both scalability and response quality. Over 60% of survey participants reported that the knowledge graph-enhanced tutor provided clearer and more relevant guidance, particularly when building on concepts they already understood. Additionally, over 80% of respondents noted improvements in the tutor’s ability to address weak areas and provide targeted practice, especially when preparing for quizzes or exams.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162744</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>SPIRAL: Iterative Subgraph Expansion for Knowledge-Graph Based Retrieval-Augmented Generation</title>
<link>https://hdl.handle.net/1721.1/162743</link>
<description>SPIRAL: Iterative Subgraph Expansion for Knowledge-Graph Based Retrieval-Augmented Generation
Hadjiivanov, Michael D.
Large language models (LLMs) excel at generating fluent answers but are prone to hallucination when the prompt fails to anchor them to verifiable facts. Retrieval-augmented generation (RAG) mitigates this risk, yet existing graph-based retrievers either return bloated neighborhoods or incur prohibitive latency on large knowledge graphs (KGs). We introduce SPIRAL—Supervised Prior + Iterative Reinforcement with Adaptive Labelling—a lightweight two-stage framework that constructs compact, tree-shaped evidence subgraphs. This differs from previous work in its use of a trained, iterative policy network built on top of a prior over triples, delivering improved performance on multi-hop question answering tasks. Stage 1 trains a single-label GLASS-GNN on shortest-path heuristics, producing frozen, question-aware node embeddings at negligible runtime cost with significant local topology awareness around question entities. Stage 2 layers a GLASS policy—which re-labels the partial subgraph at each step—on top of these embeddings and optimizes it with proximal policy optimization. The policy scores only the 1-hop frontier, enabling sub-second inference even on million-edge graphs. On the multi-hop KGQA benchmark WebQSP, SPIRAL attains 0.95 triple recall and 0.97 answer recall while retrieving at most 50 triples—doubling the sampling efficiency of the strongest prior work. Coupled with Llama 3.1-8B, the retrieved trees boost Hit@1 by 2.5% over SubgraphRAG. Ablation studies confirm that adaptive labels are critical for multi-hop reasoning. SPIRAL demonstrates that accurate and concise retrieval is achievable without resorting to massive models or expensive graph crawls, opening the door to real-time, KG-grounded assistants on modest hardware.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162743</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Injection of Domain-Specific Knowledge for Enterprise Text-to-SQL</title>
<link>https://hdl.handle.net/1721.1/162742</link>
<description>Injection of Domain-Specific Knowledge for Enterprise Text-to-SQL
Choi, Justin J.
This work examines the current state of using large language models (LLMs) to solve Text-to-SQL tasks on databases in an enterprise setting. Benchmarks on publicly available datasets do not fully capture the difficulty and complexity of this task in a real-world, enterprise setting. This study details the critical steps needed to work with enterprise data, as well as the use of knowledge-injection to enhance the performance of LLMs on Text-to-SQL tasks. We begin by evaluating the baseline performance of LLMs on enterprise databases, revealing that a predominant source of failure stems from a lack of domain-specific knowledge. To improve performance, we explore knowledge-injection: the process of incorporating internal and external knowledge. Internal knowledge consists of database-specific information such as join logic, while external knowledge refers to institutional acronyms or group names. We present a hybrid retrieval pipeline that combines embedding-based and text-based search with LLM-guided ranking to supply models with relevant external knowledge during Text-to-SQL generation. We evaluate the impact of knowledge-injection by testing the performance of LLMs on the table retrieval task after they are augmented with appropriate external knowledge. We demonstrate that knowledge-injection significantly improves accuracy on table retrieval using BEAVER, an enterprise-level Text-to-SQL benchmark. Our findings highlight the importance of domain-specific knowledge-injection and retrieval augmentation in bringing LLMs closer to deployment in enterprise-grade database systems, as well as common failure modes that occur when executing enterprise Text-to-SQL.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162742</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating the Feasibility of Transaction Scheduling via Hardware Accelerators</title>
<link>https://hdl.handle.net/1721.1/162741</link>
<description>Evaluating the Feasibility of Transaction Scheduling via Hardware Accelerators
Chomphoochan, Thanadol
As single-thread performance plateaus, modern systems increasingly rely on parallelism to scale throughput. Yet, efficiently managing concurrency—particularly in transactional systems—remains a major bottleneck. This thesis explores the feasibility of accelerating transaction scheduling via hardware, leveraging FPGAs to offload scheduling logic from the CPU. We revisit Puppetmaster, a hardware transaction scheduler, and present a redesigned architecture emphasizing deployability, modularity, and evaluation. We implement both an optimized software baseline and a Bluespec-based hardware design, evaluating their performance across synthetic YCSB-style workloads with varying contention levels. Our hardware prototype demonstrates competitive throughput, achieving over 90% of peak throughput even under high-contention workloads. These results validate the potential of transaction scheduling as a target for hardware acceleration and highlight promising directions for future hybrid hardware-software concurrency-control systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162741</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Exploration of Thermodynamic Models of Geological CO₂ Injection</title>
<link>https://hdl.handle.net/1721.1/162740</link>
<description>Computational Exploration of Thermodynamic Models of Geological CO₂ Injection
Edelman, Jonathan
This thesis investigates the behavior of carbon dioxide flow in porous media through high-fidelity computational modeling, with a specific focus on the impact of the Span-Wagner equation of state (EOS). Accurate modeling of CO₂ transport in subsurface environments is essential for applications such as carbon capture and storage (CCS). We model the entire flow from injection, down through a vertical pipe, and into a porous reservoir. To this end, we utilize the MOOSE (Multiphysics Object-Oriented Simulation Environment) framework developed by Idaho National Laboratory to perform finite element simulations. A key contribution of this work is the successful coupling of a porous rock domain with a one-dimensional pipe flow simulation in Julia, enabling a broader representation of injection scenarios. The study examines how the thermodynamic accuracy of the Span-Wagner EOS influences flow characteristics, in comparison to the ideal gas EOS. Through a series of coupled pipe-reservoir simulations, we assess variations in pressure and density as CO₂ is injected from the pipe into the porous medium. The model can detect phase change conditions, allowing us to predict the maximum mass flux that can be achieved below the liquefaction threshold, as defined by the binodal curve in the CO₂ phase diagram at a given temperature. The results highlight the importance of EOS selection in predicting multiphase flow behavior, especially under conditions relevant to geological storage. Furthermore, we find that the ideal gas EOS underpredicts injection rates under the same conditions. This integrated modeling approach advances the understanding of thermodynamic effects in coupled subsurface flow systems and supports the development of reliable tools for large-scale carbon storage applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162740</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Clinical Text De-identification Using Large Language Models: Insights from Organ Procurement Data</title>
<link>https://hdl.handle.net/1721.1/162739</link>
<description>Clinical Text De-identification Using Large Language Models: Insights from Organ Procurement Data
Dahleh, Omar
This thesis presents a novel approach to the de-identification of clinical notes from Organ Procurement Organization (OPO) records, leveraging advanced natural language processing (NLP) methodologies. Specifically, we employ in-context learning with large language models (LLMs) to effectively identify and remove protected health information (PHI), aiming to maintain high data utility post-redaction. Our work systematically evaluates the performance of the LLM-based method against established baseline techniques, including traditional Named Entity Recognition (NER) and rules-based systems. Through a series of experiments, we assess the strengths and limitations of each method in terms of precision and recall. This work will contribute to a uniquely extensive dataset, comprising millions of de-identified OPO clinical notes, which will facilitate ethical healthcare research and enhance compliance with contemporary data protection standards. Ultimately, this dataset holds significant potential for improving processes and outcomes within the field of organ donation and procurement.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162739</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Novel Energy Catalyst Discovery Using Automation, Active Learning, and AI</title>
<link>https://hdl.handle.net/1721.1/162738</link>
<description>Accelerating Novel Energy Catalyst Discovery Using Automation, Active Learning, and AI
Ren, Zhichu
The discovery of novel energy catalysts is a critical challenge in the field of materials science. Traditional methods for materials discovery are labor-intensive and time-consuming, hindering the rapid development of new catalysts. To address this issue, we introduce a comprehensive approach that integrates automation, active learning, and artificial intelligence (AI) to accelerate the discovery process.

Our approach introduces the Copilot for Real-world Experimental Scientist (CRESt) system, which combines a large multimodal model (LMM) with an active learning-guided robotic system. CRESt streamlines the workflow of composition selection, high-throughput materials synthesis, electrochemical screening and characterization for the optimization of high-entropy alloy catalysts. The system allows researchers, regardless of their programming skills, to interact with the robotic platform using voice commands, making it highly accessible and user-friendly.

We demonstrate the effectiveness of our approach by experimentally exploring over 700 chemistries and 1300 samples. The optimized 8-dimensional alloy (Pd-Pt-Cu-Au-Ir-Ce-Nb-Cr) achieved approximately 10 times the cost-specific performance of commercial catalysts for the direct formate fuel cell. This breakthrough highlights the potential of our approach to accelerate the discovery of novel energy catalysts across various domains.

Furthermore, we discuss the challenges and considerations associated with implementing active learning in real-world experiments. We provide guidance on addressing model-centric and data-centric issues, such as model customization and data irreproducibility, to ensure the successful application of active learning in materials research projects.

Looking ahead, we explore the role of human experimentalists in the era of AI-driven discovery. While AI and automation are poised to transform many aspects of experimental research, we argue that human experimentalists remain irreplaceable for now. Our ability to exercise critical thinking and engage in complex real-world interactions sets us apart from abiotic intelligence. However, as AI becomes more deeply integrated into research practices, the experimental landscape is bound to undergo significant changes.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162738</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient ML Inference via Matrix-Vector Approximations</title>
<link>https://hdl.handle.net/1721.1/162737</link>
<description>Efficient ML Inference via Matrix-Vector Approximations
Li, Daniel D.
Efficient inference is a growing priority in deep learning, where large model sizes and increasing deployment demands pose challenges for latency, memory, and energy usage. This thesis presents a unified framework for evaluating approximation methods that accelerate inference by modifying weight matrices. We model each method as a function ƒ_c(A) that approximates a weight matrix A under a compression rate c, and assess its impact on both matrix–vector accuracy and downstream task performance. We conduct empirical evaluations across two representative models, AlexNet on CIFAR10 and DistilBERT on AG News, comparing quantization, sparsification, and low-rank approximations. Our analysis spans four perspectives: (1) how different methods trade off ℓ₂ error and compression, (2) how weight statistics and input distributions shape error, (3) how well ℓ₂ error predicts classification accuracy, and (4) how idealized compression differs from real memory savings. We find that sparsification offers a strong trade-off between storage and accuracy, particularly because it preserves task-relevant structure in the weights. We also show that ℓ₂ error is not always a reliable proxy for accuracy, especially when input data lie on low-dimensional manifolds. These results suggest that approximation quality must be evaluated not only by global distortion metrics, but also by how the method interacts with model structure and input distributions. Our findings offer practical guidance for deploying efficient deep learning models and shed light on how compression affects performance in real-world settings.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162737</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Pedagogical Multimodal System for Mathematical Problem-Solving and Visual Reasoning</title>
<link>https://hdl.handle.net/1721.1/162736</link>
<description>A Pedagogical Multimodal System for Mathematical Problem-Solving and Visual Reasoning
Lee, Jimin
Effective reasoning often requires more than text or language. It requires visualizing, drawing, gesturing, and interacting for both humans and artificial intelligence (AI). Specifically in educational subjects, such as geometry and graphs, visual tools like auxiliary annotations and drawings can greatly help students understand abstract theories. This thesis explores and suggests how multimodal interaction between humans and AI helps humans engage with the system more naturally and effectively, leading to improved problem-solving in mathematical settings. Recent large multimodal models (LMMs) have the ability to facilitate collaborative reasoning by supporting textual, visual, and interactive inputs, diversifying methods of communication between humans and AI. Utilizing such advancements, this thesis also dives into the development of Interactive Sketchpad, a tutoring system that combines language-based explanations with interactive visualizations to enhance learning. It also reviews findings from user studies with Interactive Sketchpad, demonstrating that multimodality contributes to user task comprehension and engagement levels. Together, these contributions can reframe the role of AI in education as a visual and interactive collaborator that supports deeper reasoning rather than simply providing answers. Furthermore, this work demonstrates the potential of multimodal human-AI systems in fostering engagement and scaling personalized, visual learning across domains.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162736</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast and Scalable Subgraph Learning</title>
<link>https://hdl.handle.net/1721.1/162735</link>
<description>Fast and Scalable Subgraph Learning
Liang, Derrick
Graph Neural Networks (GNNs) are a powerful framework for learning over structured data, enabling predictive modeling across domains such as bioinformatics, recommendation systems, and financial fraud detection. While scalable systems like SALIENT++ have advanced the training of node-level GNN tasks at industrial scale, they do not support an emerging class of workloads: subgraph classification, which is increasingly common in real-world applications. Prior implementations address this gap by modifying both the data pipeline and the model architecture—but at the cost of composability, creating tightly coupled systems that slow further development. This thesis introduces MOSAIC, a lightweight data transformation that reframes subgraph classification as nodewise prediction by augmenting the graph with representative nodes. This approach enables direct compatibility with SALIENT++ and other nodewise systems while decoupling workload format, dataloader design, and model architecture. I demonstrate that MOSAIC enables modular reuse of architectures like GraphSAGE and subgraph-aware components from GLASS, while preserving SALIENT++’s system-level scalability. On the large-scale Elliptic2 dataset, this integration reduces training memory usage by 2.8× and epoch runtime from over 90 minutes to 0.4 seconds—while improving classification performance. I implement MOSAIC as a succinct (&lt;100-line), reusable preprocessing script, enabling integration of the GLASS architecture into SALIENT++ in &lt;10 lines of code, compared to Wang et al.’s tightly coupled 500+ line design. These results highlight the feasibility of scalable, composable experimentation for subgraph learning tasks in high-performance GNN systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162735</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Scene Editing via Semantically Trained 3D Gaussians</title>
<link>https://hdl.handle.net/1721.1/162734</link>
<description>Dynamic Scene Editing via Semantically Trained 3D Gaussians
Lam, Jordan
Image-based 3D scene reconstruction continues to be a challenge as it involves solving both the sufficient 3D representation problem and the 3D reconstruction itself. One approach to tackle the rendering problem is 3D Gaussian Splatting because of its potential to produce fast and realistic renders via a 3D Gaussian representation. With many applications in the entertainment industry, there is motivation to use 3D Gaussian Splatting not only for reconstructing dynamic 3D scenes but also for editing them. However, extending the problem to dynamic 3D scenes proves to be a challenging task, as it involves discerning the correct representation of a 3D scene while maintaining the capability to render in real time. State-of-the-art work has proposed methods that reconstruct dynamic scenes or edit static scenes, but the problem of editing dynamic scenes is still underexplored. This thesis analyzes the feasibility of editing semantically trained Gaussians for dynamic 3D scene editing. By training 3D Gaussians to represent the semantics across the time steps of a dynamic 3D scene, these primitives can be combined with an image editing pipeline to perform real-time, realistic 3D scene editing. Results show that editing segmented 3D Gaussians produces higher-quality and more efficient renders than editing without segmentation. However, when evaluated for mainstream applications, results show the impracticality of this pipeline and draw focus to memory and editing limitations that need to be further researched for future advances in 3D Gaussian Splatting.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162734</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decentralized AI for Methylation Data with Applications to Precision Health</title>
<link>https://hdl.handle.net/1721.1/162733</link>
<description>Decentralized AI for Methylation Data with Applications to Precision Health
Jamee, Mehrab S.
Advances in precision health rely on integrating large-scale genomic data to identify biomarkers and predict health outcomes. However, sharing sensitive patient data between institutions like hospitals poses significant privacy and security challenges, limiting collaboration and the development of robust machine learning models. This thesis proposes a decentralized artificial intelligence framework for analyzing DNA methylation data, enabling institutions to collaboratively train models without exchanging sensitive information. By taking advantage of generative deep learning techniques and federated learning paradigms, the framework aims to impute missing biomarkers in fragmented datasets and improve the accuracy of downstream predictive tasks, such as predicting chronological age, mortality, and cancer outcomes. Two intermediate models are implemented and evaluated in this thesis. The first predicts age from DNA methylation data and can be used for evaluation of the imputation model. The second is an imputation model that uses a conditional autoencoder architecture to reconstruct missing biomarker data in clinical datasets, and is designed to take advantage of contextual methylation embeddings made available by recently published pretrained epigenomics foundation models. This work seeks to advance the use of decentralized AI in epigenomics, with the ultimate goal of improving personalized healthcare while preserving patient privacy.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162733</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Smallholder Field Delineation</title>
<link>https://hdl.handle.net/1721.1/162732</link>
<description>Exploring Smallholder Field Delineation
Janjigian, Lily T.
Accurate crop field delineation from satellite imagery is a critical component of agricultural monitoring. However, most existing models are developed and evaluated in large-scale, industrial agricultural regions, where field boundaries are relatively regular and high-quality annotated data is more readily available. In contrast, smallholder regions—where fields are smaller, more irregularly shaped, and often lack precise geospatial labels—remain underrepresented in both data and model performance. This thesis investigates model architectures, loss functions, and learning paradigms for improving segmentation performance in smallholder settings. Using datasets from Austria, India, and Rwanda, we evaluate several model configurations including ResUNet++ with Dice+BCE and Tanimoto+BCE losses, a meta-learned ResUNet++ using Model-Agnostic Meta-Learning (MAML), and SAM2 ViT-H, a large vision transformer released by Meta, evaluated in a zero-shot setting. We introduce a data processing pipeline that converts vector field boundaries from the FTW dataset into high-resolution image–mask pairs suitable for supervised learning. Quantitative and qualitative results reveal that models trained on industrial-scale data perform poorly in smallholder regions without adaptation. SAM2 exhibits strong zero-shot performance, especially on larger fields, while ResUNet++ models trained directly on India perform more consistently across small-to-medium-sized fields. MAML yielded underwhelming performance under resource constraints, highlighting the need for further tuning. These findings underscore the importance of geographically diverse, well-aligned training data and support the case for developing globally representative agricultural segmentation datasets.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162732</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>You Only Look Twice: An Ensemble Deep Learning Model for Wildfire Detection Using Terrestrial Camera Networks</title>
<link>https://hdl.handle.net/1721.1/162731</link>
<description>You Only Look Twice: An Ensemble Deep Learning Model for Wildfire Detection Using Terrestrial Camera Networks
Jones, John M.
Wildfires represent a growing global threat that requires rapid detection and response to minimize environmental damage, economic losses, and human casualties. In the United States, California stands out as a particularly common wildfire hot spot. Recent fire seasons have shattered historical records and been particularly devastating. This work investigates innovative methods for classifying and localizing wildfires through terrestrial cameras positioned on elevated terrain, aimed at improving early detection capabilities and response times while maintaining computational efficiency and reliability for the U.S. Space Force in Southern California. We present YOL2, a novel ensemble approach that combines a fine-tuned ConvNeXt Convolutional Neural Network incorporating a Dynamic Tanh normalization layer with a fine-tuned YOLO11 model for precise localization. Using a comprehensive dataset of 33,636 time-sequenced images from terrestrial cameras across the United States and Europe, our system achieves 98% fire detection accuracy and 55% localization mean average precision [50:95]. The implementation of Dynamic Tanh normalization—applied for the first time in wildfire detection—enhances computational efficiency without sacrificing performance. The images used capture the spread of incipient fires over time, with most containing bounding boxes denoting the approximate location of fire, allowing our system to identify fires quickly while minimizing false positives. Importantly, our spatiotemporal system operates effectively without requiring individual models to rely on multiple time steps as input, enabling modular component replacement and adaptation. The use of pan, tilt, and zoom cameras in concert with our YOLO model provides a more computationally efficient confirmation of fire than alternative methods, showing that extracting better results from less information is possible. 
Beyond wildfire applications, the YOL2 ensemble methodology demonstrates profound implications for remote sensing more broadly. This work establishes a foundation for highly efficient visual detection systems applicable across numerous domains requiring rapid and accurate object identification and localization.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162731</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards transparent representations: on internal structure and external world modeling in LLMs</title>
<link>https://hdl.handle.net/1721.1/162730</link>
<description>Towards transparent representations: on internal structure and external world modeling in LLMs
Hariharan, Kaivalya
Large language models (LLMs) generalize far beyond their training distribution, enabling impressive downstream performance in domains vastly different from their pretraining data. In this thesis, we develop a data-centric view on machine learning. We suggest that the deep generalization of LLMs is best understood by studying the relationships between the four fundamental components of this data generalization: pretraining data, test-time inputs, model outputs, and internal structure. Of these, we present two full research studies characterizing test-time inputs and internal structure. Chapter 1 develops the data-centric view of machine learning and outlines the thesis. Chapter 2 presents Breakpoint, a method for generating difficult coding tasks for models at large scale that attempts to disambiguate the factors that make problems difficult at test time. Chapter 3 analyzes the structure of gradient-based jailbreaks (GBJs) in LLMs. We argue that even though GBJs are more out of distribution than even random text, they induce a low-rank, structured change in models. Finally, Chapter 4 discusses the recent rise of reasoning models and proposes some lines of future work in the data-centric view toward developing a more robust understanding of LLMs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162730</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biomechanical Validation of Skeletal Tracking Data and Developing Action Recognition Models for Basketball: A Baseline for NBA Officiating Tools</title>
<link>https://hdl.handle.net/1721.1/162729</link>
<description>Biomechanical Validation of Skeletal Tracking Data and Developing Action Recognition Models for Basketball: A Baseline for NBA Officiating Tools
Hong, Stephen S.
Optical tracking technology in sports has advanced rapidly in recent years, enabling new opportunities for data-driven analysis and tools to enhance the game. This study presents a framework for processing and analyzing a new skeletal tracking dataset collected from NBA basketball games. The methodology includes biomechanical joint validation, anomaly detection, and region-based consistency analysis to assess the integrity of player motion data. Joint movement anomalies are used to detect tracking errors, while court region and stadium-level evaluations help identify where the optical tracking system may be underperforming. These patterns can guide data providers toward specific areas that require refinement, offering a clearer starting point for improving system accuracy. After cleaning the dataset of 117 NBA games, two action recognition models—a transformer-based model and a temporal graph neural network—are implemented to classify player actions, specifically dribbling, passing, shooting, and rebounding, from sequences of skeletal tracking frames. The objective is to establish a baseline for developing tools to support officiating decisions in the NBA. By leveraging spatiotemporal representations of joint motion, this work improves the reliability of skeletal tracking data and contributes to the advancement of automated decision support in professional sports officiating.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162729</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Enhanced Proposals for PINN-Based Neural Sampler Training</title>
<link>https://hdl.handle.net/1721.1/162728</link>
<description>Towards Enhanced Proposals for PINN-Based Neural Sampler Training
Erives, Ezra
Sampling from distributions whose density is known up to a normalizing constant is an important problem with a wide range of applications including Bayesian posterior inference, statistical physics, and structural biology. Annealing-based neural samplers seek to amortize sampling from unnormalized distributions by training neural networks to transport a family of densities interpolating from source to target. A crucial design choice in the training phase of such samplers is the proposal distribution by which locations are generated at which to evaluate the loss. Previous work has obtained such a proposal distribution by combining a partially learned vector field with annealed Langevin dynamics. However, isolated modes and other pathological properties of the annealing path imply that such proposals achieve insufficient exploration and thereby lower performance post training. In this work we extend existing work and characterize new families of proposals based on controlled Langevin dynamics. In particular, we propose continuously tempered diffusion samplers, which leverage exploration techniques developed in the context of molecular dynamics to improve proposal distributions. Specifically, a family of distributions across different temperatures is introduced to lower energy barriers at higher temperatures and drive exploration at the lower temperature of interest. We additionally explore proposals based on Langevin dynamics involving non-Newtonian kinetic energies. We empirically validate improved sampler performance driven by extended exploration.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162728</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Sketch to Stage: Tools for Prototyping and Exporting Collaborative DMIs on the Web</title>
<link>https://hdl.handle.net/1721.1/162727</link>
<description>From Sketch to Stage: Tools for Prototyping and Exporting Collaborative DMIs on the Web
Luchko, Yaro
This thesis presents tools and ideas for prototyping and exporting collaborative digital music instruments (DMIs) on the web, the primary purpose of which is to lower the barrier to making music and to enable easier collaboration. This is done in the context of the Creativitas website, which has become a tool of the MIT 21M.080 "Introduction to Music Technology" class to learn about music technology and audio on the web, and a tool for FaMLE (the Fabulous MIT Laptop Ensemble) to use in live performances. The website allows creators to execute code within an editor code box and partake in a practice known as live coding, ultimately creating both sound and visuals. Audio is primarily created with the Tone.js interactive web audio framework, and visuals are drawn on a provided canvas using p5.js. This thesis extends the Creativitas website by providing functionality for exporting the written code as a standalone website. The exported standalone websites serve as DMIs, with standard controls such as volume, tempo, and start and stop buttons. Furthermore, we discuss and implement strategies for synchronizing timing and instrument values. This includes state-of-the-art strategies, as well as ideas for creating extendable interfaces that can include more strategies as they are developed. We end with two examples of exported DMIs, which can be effectively used in performances.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162727</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable Expressiveness in Non-Social Tasks: A Mixed-Methods Study of Middle School AI Learning</title>
<link>https://hdl.handle.net/1721.1/162726</link>
<description>Programmable Expressiveness in Non-Social Tasks: A Mixed-Methods Study of Middle School AI Learning
Lei, Si Liang
Background. Programmable expressive features—such as speech, facial expressions, and chatbot-style dialogue—are often promoted as tools to enhance engagement in educational robotics. While prior research shows benefits in socially-oriented tasks like storytelling or group collaboration, it remains unclear how student-controlled expressive blocks affect learning when the task itself is non-social. This study isolates the impact of such features in a context where expressiveness is not instructionally required. Method. We conducted a controlled, two-cohort study with 41 middle school students (ages 10–12) during a one-day AI-and-robotics workshop using the Doodlebot platform. Students in the experimental group had access to optional blocks enabling the robot to speak, emote, and use GPT-based responses. These features were hidden from the control group. All participants completed identical programming tasks (e.g., maze navigation, visual classification) that did not require social interaction. Data sources included pre/post surveys, facilitator notes, and student code. We applied the Mann–Whitney U test [1, 2] and reflexive thematic analysis [3, 4] to examine outcomes. Results. The expressive condition showed no significant gains in programming confidence or peer trust, but performed significantly worse on the post-workshop concept quiz (p = .007, r = .41). Qualitative data revealed that students in this group often used expressive blocks for entertainment rather than learning, leading to distraction, off-task behavior, and increased reliance on adult facilitation. Contributions. This study contributes (i) empirical evidence on the limitations of robot expressiveness in non-social learning contexts, (ii) a mixed-methods protocol for analyzing classroom robot deployments, and (iii) design guidance for aligning robot behavior with pedagogical intent. Implications. Expressiveness in educational robots should be contextually deployed—not assumed beneficial by default. 
In technical, goal-driven tasks that do not involve social reasoning, unscaffolded expressiveness may introduce cognitive overhead or divert attention. We propose a “dial-a-sociality” model, where robot behavior can be flexibly tuned to match the demands of the learning environment.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162726</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Speed Simulator for Millimeter-Wave Synthetic Aperture Radar</title>
<link>https://hdl.handle.net/1721.1/162724</link>
<description>High-Speed Simulator for Millimeter-Wave Synthetic Aperture Radar
Kuka, Adrian
The past few years have witnessed growing interest in using millimeter-wave signals for non-line-of-sight (NLOS) perception tasks, with applications in robotics, augmented reality, and smart homes. However, existing systems suffer from a lack of large mmWave datasets, resulting in limited accuracy and generalizability compared to their line-of-sight, camera-based counterparts. We present the design, implementation, and evaluation of mmSim, a new, high-speed millimeter-wave (mmWave) simulator capable of producing large synthetic datasets to help drive the field of mmWave-based NLOS perception. mmSim introduces two main contributions to improve speed over existing mmWave simulators. First, it pre-selects the areas of the object that will produce reflections toward each simulated antenna location, allowing it to minimize future computation. Second, it introduces a coarse-to-fine approach that allows early, less critical steps to operate at lower resolutions, while maintaining the high resolution in later steps required for high-accuracy images. These techniques, combined with other performance optimizations, allow mmSim to achieve a more than 24x improvement in speed over state-of-the-art mmWave simulators.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162724</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards AI Safety via Interpretability and Oversight</title>
<link>https://hdl.handle.net/1721.1/162723</link>
<description>Towards AI Safety via Interpretability and Oversight
Kantamneni, Subhash
In this thesis, we advance AI safety through mechanistic interpretability and oversight methodologies across three key areas: mathematical reasoning in large language models (LLMs), the validity of sparse autoencoders, and scalable oversight. First, we reverse-engineer addition within mid-sized LLMs and discover that LLMs represent numbers as helices. We demonstrate that LLMs perform addition via the manipulation of these helices using a "Clock" algorithm, providing the first representation-level explanation of mathematical reasoning in LLMs, verified through causal interventions on model activations. Next, we rigorously evaluate sparse autoencoders (SAEs), a popular interpretability tool, by testing their effectiveness on the downstream task of probing. We test SAEs under challenging probing conditions, including data scarcity, class imbalance, label noise, and covariate shift. While SAEs occasionally outperform baseline methods, they fail to consistently enhance task performance, underscoring a potentially critical limitation of SAEs. Lastly, we introduce a quantitative framework to evaluate scalable oversight - a promising idea where weaker AI systems supervise stronger ones - as a function of model intelligence. Applying our framework to four oversight games ("Mafia," "Debate," "Backdoor Code," and "Wargames"), we identify clear scaling patterns and extend our findings through a theoretical analysis of Nested Scalable Oversight (NSO), deriving conditions for optimal oversight structures. Together, these studies advance our understanding of AI interpretability and alignment, providing insights and frameworks to progress AI safety.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162723</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metagradient Descent: Differentiating Large-Scale Training</title>
<link>https://hdl.handle.net/1721.1/162722</link>
<description>Metagradient Descent: Differentiating Large-Scale Training
Chen, Benjamin
A major challenge in training large-scale machine learning models is configuring the training process to maximize model performance, i.e., finding the best training setup from a vast design space. In this work, we unlock a gradient-based approach to this problem. We first introduce an algorithm for efficiently calculating metagradients -- gradients through model training -- at scale. We then introduce a "smooth model training" framework that enables effective optimization using metagradients. With metagradient descent (MGD), we greatly improve on existing dataset selection methods, outperform accuracy-degrading data poisoning attacks by an order of magnitude, and automatically find competitive learning rate schedules.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162722</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A simplified approach to calculating personalized estimates for electric vehicle charging delays</title>
<link>https://hdl.handle.net/1721.1/162721</link>
<description>A simplified approach to calculating personalized estimates for electric vehicle charging delays
Chen, Helen
In the past decade, electric vehicles (EVs) have gained traction as a cleaner alternative to internal combustion engine vehicles, commonly referred to as gas-powered vehicles. To promote EV adoption, the government has implemented various regulations and incentives to support the transition to cleaner transportation. However, EV adoption in the United States has progressed more slowly than expected, with EVs accounting for less than 10 percent of new vehicle sales in 2023. Recent surveys indicate that a significant barrier is the perceived inconvenience and uncertainty surrounding EV charging, particularly the additional time required to charge during active use, which we call charging delay. Currently, there exist some models for estimating these charging delays, but these models require users to input a significant amount of information, such as their daily driving schedules, locations of charging stations, and exact distances of trips taken each year, which many users may not even remember. These more complex models are likely to overwhelm users, especially those who may be entirely new to EVs. To fill this gap, this thesis introduces a simplified model for estimating personalized annual EV charging delay using a set of easy-to-provide inputs, including typical driving behavior and access to home and work charging. The model logic captures delay from both routine usage, such as weekly driving patterns or typical trips, and occasional, high-energy long-distance trips, which, while not routine, are still important to account for. For weekly trips, the model considers four scenarios based on combinations of home and work charging access to determine driving and charging schedules. For long-distance travel, the model uses data from the 2022 National Household Travel Survey (NHTS) and performs multiple iterations of bootstrap resampling to create synthetic distributions of long-distance trips within a year. 
Data related to individual routine vehicle usage and charging delay is unavailable, so we are unable to validate the model’s performance through accuracy calculations. Instead, we performed a one-at-a-time sensitivity analysis to better understand how charging delay is affected by different factors. We found that access to private charging, such as home or work charging, improves charging delay robustness for regular weekly trips, with the exception that relying solely on work charging on workdays can cause stepwise increases in non-workday delays. Additionally, long-distance trip delays are not affected by private charging access and follow a stepwise pattern based on vehicle range. In general, the simplified approach presented in this thesis offers a more accessible way for current and prospective EV owners to clearly understand their own expected experience of EV ownership.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162721</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Limits of Temporal Abstractions for Reinforcement Learning with Sparse Rewards</title>
<link>https://hdl.handle.net/1721.1/162720</link>
<description>The Limits of Temporal Abstractions for Reinforcement Learning with Sparse Rewards
Li, Zhening
Skills are temporal abstractions that are intended to improve reinforcement learning (RL) performance through hierarchical RL. Despite our intuition about the properties of an environment that make skills useful, there has been little theoretical work aimed to characterize these properties precisely. This work studies the utility of skills in sparse-reward environments with a discrete state space and finite action space. We show, both theoretically and empirically, that RL performance gains from skills are worse in environments where successful trajectories are less compressible. In environments with a highly incompressible distribution of successful trajectories, using unexpressive skills such as macroactions will provably worsen RL performance. We hope our findings can guide research on automatic skill discovery and help RL practitioners better decide when and how to use skills.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162720</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Supervised ECG Learning for Multimodal Clinical Tasks</title>
<link>https://hdl.handle.net/1721.1/162719</link>
<description>Self-Supervised ECG Learning for Multimodal Clinical Tasks
Chen, Peilin
We present a multimodal clinical AI framework that integrates time series, images, and text to support robust diagnostic reasoning across diverse input combinations. We first introduce ECG-JEPA, a self-supervised encoder pretrained on multiple ECG datasets to learn generalizable time series representations. This unimodal pretraining improves ECG classification, achieving a 23-point AUC gain on the underrepresented Ga dataset. We then align and fuse these ECG embeddings with chest X-rays and EHR text using a vision–language model backbone, enabling end-to-end multimodal inference. Our results show that incorporating ECG signals meaningfully improves diagnostic performance, highlighting the value of multitask time series pretraining and modular fusion for clinical AI.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162719</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Hierarchical Approach to Quantitative Portfolio Optimization for Technology Development Project Portfolios (OPTIM-H)</title>
<link>https://hdl.handle.net/1721.1/162718</link>
<description>A Hierarchical Approach to Quantitative Portfolio Optimization for Technology Development Project Portfolios (OPTIM-H)
Huang, Roderick W.
The use of Mean-Variance Portfolio Optimization (MVO) in Modern Portfolio Theory (MPT) has been a long-standing method to guide investment decisions for market-traded assets like stocks and bonds. Recent research shows that portfolio optimization developed using MPT could prove useful in investment decisions for technology projects. Traditionally, empirical data from past projects and statistically driven technology trends are used to predict the risk-return model necessary for MPT. This thesis introduces a new methodology, Optimizing Portfolios in Technologies Investments Methodology with Hierarchy (OPTIM-H), which extends MPT to make investment decisions within a hierarchical organizational structure of technology projects. An integrated dataset was developed to demonstrate this methodology, combining 19,000 data records from Techport and Small Business Innovation Research (SBIR) datasets. The dataset captures investment trends and maturity pathways across 17 taxonomy areas, revealing that most projects begin at Technology Readiness Levels (TRLs) 2–4, with average funding amounts near $300,000. OPTIM-H effectively distinguishes between broader technology groups and their subcategories, showing the impact of community interest on investment decisions. Furthermore, this work investigates k-means clustering as a tool for classifying technology projects for targeted investment, with the analysis identifying seven clusters and achieving a mean utility score of 0.595 with a standard deviation of 0.651.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162718</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating Canvas with a Large-Scale Social Annotation Platform</title>
<link>https://hdl.handle.net/1721.1/162717</link>
<description>Integrating Canvas with a Large-Scale Social Annotation Platform
Heiberger, Henry R.
The last decade has seen a growing interest in the use of collaborative annotation systems, educational tools that allow multiple users to asynchronously comment, highlight, and discuss digital content directly on the source material, transforming traditional classroom readings into a more engaging group activity. Originally developed by MIT CSAIL’s Haystack Group in 2012 under the direction of Professor David Karger, Nota Bene (NB) is a particular collaborative annotation tool that allows students to have annotated online discussions in the margins of textbooks, papers, and even webpages [1]. Though various studies have already proven its ability to succeed in a classroom setting, conversations with key stakeholders have revealed that the tool is missing a key feature found in many other popular collaborative annotation solutions: integration with the Canvas learning management system (LMS) [1–3]. Thus, this work sought to integrate the classroom management features that Canvas provides into the NB platform by supporting Canvas account linking, class importation and roster synchronization, and automatic grade uploading. By doing this, we hoped to improve NB’s quality as a classroom tool, enhancing its value to institutions, encouraging its wider adoption across the academic landscape, and aligning with a much broader trend of creating more integrated, efficient, and user-friendly educational technology solutions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162717</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Passive-Scoping as a method for Large Language&#13;
Model Robustness to Jailbreaks and Adversarial Examples</title>
<link>https://hdl.handle.net/1721.1/162716</link>
<description>On Passive-Scoping as a method for Large Language&#13;
Model Robustness to Jailbreaks and Adversarial Examples
Hernandez, Adriano
Artificial Intelligence (AI) and large language models (LLMs) present a challenge not only for adversarial robustness, but also for the natural emergence of unwanted capabilities. Current approaches to safeguarding AI and LLMs predominantly rely on explicitly restricting known instances of such threats. However, this places a burden on model developers, because they cannot anticipate all the potential attacks and undesirable capabilities. To solve this problem, we leverage interdisciplinary knowledge. In the field of information security, the principle of least privilege provides guidance on how to defend against unknown threats. In AI, the principle could be implemented by ensuring that developers specify the knowledge and capabilities an AI system should retain, restricting all others by default. We call this application of the principle of least privilege passive scoping. Our thesis makes two claims: &#13;
1. We argue that (a) passive scoping mitigates concerns about adversarial robustness and loss of control of AI systems and (b) passive scoping to edit the weights and activations at post-training time is underexplored by the literature. &#13;
2. Of possible approaches, our sparse autoencoder (SAE) filters can implement this underexplored type of passive scoping. They increase safety relative to LoRA finetuning and prompt engineering, but leave room for improvements. &#13;
The thesis is structured as follows: &#13;
1. Chapter 2 elucidates the challenges with adversarial robustness and loss of control risk. Chapter 3 puts forward a conceptual argument for the benefits of passive scoping. Later, it analyzes the extent to which passive scoping has been attempted. These two chapters work together to defend claims 1a and 1b. &#13;
2. Chapter 4 defines our optimization problem. Chapter 5 defines our experimental methodology and metrics. These two define our success criteria for claim 2. Chapter 6 finalizes our defense of claim 2 based on our results. &#13;
3. Chapter 7 explores related work, Chapter 8 engages in a broader discussion, and chapter 9 summarizes the contributions of this thesis.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162716</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explorations in AI and Creative Learning New Tools to Expand How Young People Imagine, Create, and Tinker with Scratch</title>
<link>https://hdl.handle.net/1721.1/162715</link>
<description>Explorations in AI and Creative Learning New Tools to Expand How Young People Imagine, Create, and Tinker with Scratch
Huang, Alexis
As generative AI tools become increasingly prevalent in young people’s lives, these technologies have a growing influence over the way that children learn. While much of the early work at the intersection of AI and education has focused on the development of intelligent tutoring systems designed to deliver content more efficiently, this thesis explores how generative AI might be used to support the creative learning process by sparking curiosity, encouraging exploration, and helping young people express themselves creatively. In this thesis, I explore ways of integrating generative AI with Scratch, the world's largest programming community for children, while remaining aligned with the core values of Scratch: creativity, playfulness, and self-expression. I designed three tools that extend the Scratch ecosystem: Scratch Connect, which explores using generative AI to help Scratchers discover projects that inspire them to create while opening the black box of recommendation systems; scrAItch, which investigates how people can iterate with generative AI by using text-based inputs to create and tinker with Scratch projects; and Scratch Spark, which reimagines the new learner experience by using generative AI to help users create personally meaningful “spark projects.” This thesis describes the process of imagining, creating, and reflecting on these tools, including many of the challenges and tensions that we encountered along the way. I discuss observations and feedback from creative workshops with young people, and conclude by reflecting on open questions and opportunities for future work in designing generative AI tools that support creative learning.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162715</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of Hardware Design Choices on Neural Network&#13;
Accuracy in Analog Inference Accelerators</title>
<link>https://hdl.handle.net/1721.1/162714</link>
<description>Effects of Hardware Design Choices on Neural Network&#13;
Accuracy in Analog Inference Accelerators
Forsythe, Eyan
Analog accelerators can enable energy-efficient and high-throughput deep neural network (DNN) computations by computing in memory. Unfortunately, device and circuit non-idealities in these accelerators, such as noise and quantization, can also lead to low DNN inference accuracy due to the computation errors they introduce. These errors are largely a function of both the choice of DNN workload and different hardware design choices, such as circuit topology and DNN operand encoding. Different hardware design choices can affect the energy, throughput, and area of the system, so it is important to understand how these design choices interact with DNN inference accuracy. However, there is a lack of a systematic understanding of how each of these hardware design decisions affects accuracy and how they interact with other design decisions. To address these issues, we model how hardware design choices can lead to analog errors such as noise and quantization. Then, we explore how these errors affect inference accuracy in analog accelerators and how tradeoffs can be made between inference accuracy, energy efficiency, area, and throughput. We find that analog errors generated from hardware design decisions can generate different amounts of accuracy loss depending on which layer in a DNN is subject to these analog errors. This leads to the structure of the DNN having a significant impact on how hardware design choices affect DNN inference accuracy, especially with respect to the individual layers of a DNN. We use knowledge of the relationships between device and circuit non-idealities to improve the accuracy of published analog accelerators and analyze the energy and area costs of the increased accuracy.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162714</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of High-Resolution SAR ADC for Detection of&#13;
Sub-Cortical Neuron Action Potentials for BMI&#13;
Applications</title>
<link>https://hdl.handle.net/1721.1/162713</link>
<description>Design of High-Resolution SAR ADC for Detection of&#13;
Sub-Cortical Neuron Action Potentials for BMI&#13;
Applications
Guobadia, Omozusi E.
The advancement of brain-machine interfaces (BMIs) requires neural signal acquisition systems that are capable of resolving both fast, low-amplitude action potentials (APs) and slow, higher-amplitude local field potentials (LFPs) under stringent power and area constraints. This thesis presents the design and simulation of a high-resolution, low-power successive approximation register (SAR) analog-to-digital converter (ADC) tailored for sub-cortical neural signal detection. To optimize dynamic range and reduce power consumption, a novel adaptive zoom-and-tracking architecture is introduced, enabling the ADC to dynamically adjust its reference window based on LFP trends while maintaining high-resolution capture of APs. The proposed system integrates a bootstrapped track-and-hold circuit, a differential capacitive DAC, and a strong-arm comparator in the analog front-end, alongside a digital FIR filter and SAR logic with zoom-range control in the digital domain. Simulations validate the functionality of each subsystem independently and in concert, demonstrating the system’s ability to dynamically isolate APs from LFP-dominated baselines while reducing analog power draw by over 60% compared to fixed-range ADCs. This work offers a promising approach for scalable, energy-efficient neural recording architectures suited to future BMI applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162713</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transformer-Based Prediction of Coronary Artery Lumen&#13;
Expansion Post Angioplasty Using Optical Coherence&#13;
Tomography</title>
<link>https://hdl.handle.net/1721.1/162712</link>
<description>Transformer-Based Prediction of Coronary Artery Lumen&#13;
Expansion Post Angioplasty Using Optical Coherence&#13;
Tomography
Gupta, Shreya
Coronary artery disease is the leading cause of mortality globally, resulting in an urgent and critical need to better understand both vessel morphology and the processes of intervention. Angioplasty is an intervention which causes a previously constricted vessel to expand via placement of a stent, and is affected by numerous characteristics of the vessel such as calcium eccentricity and size, wall thickness, and prior lumen size. Being able to accurately assess whether a stent will properly expand allows cardiologists to pursue pre-stenting calcium lesion modification strategies that help avoid dangerous complications of improper stenting. This work introduces a pipeline for post-stenting lumen area prediction from pre-stenting optical coherence tomography (OCT) images. This pipeline includes morphological correction of OCT image segmentations, explainable feature extraction from OCT segmentations, and a predictive transformer network that combines morphological features with injected stent information. The aim is for such a pipeline to be used to support clinical decision making.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162712</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complete Visual and Geometric Object Reconstruction&#13;
via Autonomous Robotic Manipulation</title>
<link>https://hdl.handle.net/1721.1/162711</link>
<description>Complete Visual and Geometric Object Reconstruction&#13;
via Autonomous Robotic Manipulation
Fu, Evelyn
Accurately simulating object dynamics based on real-world perception inputs has wide applications in digital twins and robotic manipulation. Yet, doing so requires practitioners to carefully measure and reconstruct the dynamic and geometric properties of the objects, which is time-consuming and requires domain expertise. This project proposes an automatic pipeline to construct 3D representations from a collection of real objects, which can further be used to generate assets with accurate visual texture and collision geometry for use in simulation. The pipeline is designed to have minimal hardware requirements and to minimize the time spent on physical actuation, maximizing data collection on minimal hardware.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162711</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model-based Planning for Efficient Task Execution</title>
<link>https://hdl.handle.net/1721.1/162710</link>
<description>Model-based Planning for Efficient Task Execution
Ding, Wenqi
Robotic agents navigating 3D environments must continuously decide their next moves by reasoning about both visual observations and high-level language instructions. However, they plan in a high-dimensional latent space that is opaque to human collaborators, making it difficult for humans to understand the agent’s decision-making process. This lack of interpretability hinders effective collaboration between humans and robots. The key question we are trying to answer in this thesis is: Can we build a unified planning framework that fuses visual and language inputs into a single, interpretable representation, so that humans can interpret robots’ decisions? We propose a model-based planning framework built around pretrained vision-language models (VLMs). We show that VLMs can be used to plan in a unified embedding space, where visual and language representations can be decoded back to human-interpretable forms. Empirical evaluation on vision-language navigation benchmarks demonstrates both improved sample efficiency and transparent decision making, enabling human-in-the-loop planning and more effective human-robot collaboration.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162710</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Global Non-Convex Optimization with Integer Variables</title>
<link>https://hdl.handle.net/1721.1/162709</link>
<description>Global Non-Convex Optimization with Integer Variables
Kriezis, Demetrios C.
Non-convex optimization refers to the process of solving problems whose objective or constraints are non-convex. Historically, problems of this type have been very difficult to solve to global optimality, with traditional solvers often relying on approximate solutions. Bertsimas et al. [1] introduce a novel approach for solving continuous non-convex optimization problems to provable optimality, called the Relaxation Perspectification Technique - Branch and Bound (RPT-BB). In this thesis, we extend the RPT-BB approach to the binary, mixed-binary, integer, and mixed-integer variable domains. We outline a novel branch-and-bound algorithm that makes use of the Relaxation Perspectification Technique (RPT), as well as binary, integer, and eigenvector cuts. We demonstrate the performance of this approach on two representative non-convex problems, as well as two real-world non-convex optimization problems, and we benchmark its performance against BARON and SCIP, two state-of-the-art optimization solvers for non-convex mixed-integer problems. We observe that our algorithm, despite being more general, is able to outperform the state-of-the-art solvers on many problem instances.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162709</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing AI Agents for Automated Software&#13;
Engineering with Palimpzest</title>
<link>https://hdl.handle.net/1721.1/162708</link>
<description>Optimizing AI Agents for Automated Software&#13;
Engineering with Palimpzest
Li, Jason
The deployment of large language models (LLMs) as autonomous agents is transforming the software development landscape. Increasingly, engineers are using natural language agents to expedite and guide development workflows, while large organizations are investing heavily in building agentic systems for tasks such as code generation and code repair. A key challenge in developing such systems is tuning agent hyperparameters—settings that affect performance such as choice of model, temperature settings, and context window sizes. As system complexity grows, the hyperparameter space expands, complicating optimization under real-world compute and time constraints. In this work, we present Palimpzest [1] as an agentic optimizer able to balance cost and performance objectives by tuning agentic hyperparameters. We demonstrate that Palimpzest can tune our agent hyperparameters at 8.5 times lower cost and with 24 times greater time efficiency compared to conventional grid search. By integrating our custom-built Debugger and Code Editor Agents as new operators within Palimpzest, we enhance the system’s ability to resolve real-world GitHub issues. To facilitate hyperparameter selection, we also introduce File Coverage, Report Accuracy, and Patch Similarity alongside the traditional SWE-Bench Score as quality evaluation methods used by Palimpzest’s optimization loop. When evaluated on the SWE-Bench Lite [2] benchmark, our optimized system achieves a 15% score at a significantly lower cost compared to previous approaches.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162708</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating Gradient Boosting and Generative Models:&#13;
Hybrid Approach to Address Class Imbalance and&#13;
Evaluation Gaps in Real-World Systems</title>
<link>https://hdl.handle.net/1721.1/162707</link>
<description>Integrating Gradient Boosting and Generative Models:&#13;
Hybrid Approach to Address Class Imbalance and&#13;
Evaluation Gaps in Real-World Systems
Lau, Mary
Anomaly detection remains a persistent challenge in machine learning due to extreme class imbalance, the high cost of false negatives, and the need to regulate false positives in real-world settings at scale. This thesis introduces Tail-end FPR Max Recall, a business-aware evaluation framework designed for such constrained environments. Using this framework, we benchmark LightGBM—a gradient boosting method known for its computational efficiency and predictive accuracy—on an imbalanced dataset, comparing its performance against standard academic evaluation criteria. Our results demonstrate that Tail-end FPR Max Recall fills critical gaps left by standard academic criteria, providing a more realistic assessment of model performance that aims to maximize recall while enforcing a false positive rate budget. Beyond benchmarking, we propose two strategies that incorporate deep learning methods to augment the already strong performance of gradient boosting: (1) using generative models to produce synthetic minority-class samples that outperform traditional oversampling techniques, and (2) using neural embeddings to improve feature representation for anomaly detection. Together, these contributions offer a methodology for evaluating and improving anomaly detection pipelines in domains where rare, high-impact events must be detected while meeting strict operational demands.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162707</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>FPGA Based Data Acquisition System for Cryogenic&#13;
Device Verification</title>
<link>https://hdl.handle.net/1721.1/162706</link>
<description>FPGA Based Data Acquisition System for Cryogenic&#13;
Device Verification
Kandeh, Stephen
In this work, a system of processors connected to an FPGA is interfaced with a custom analog frontend and used to create a verification environment for cryogenic devices. In particular, this thesis focuses on the technical structure of that system. Current validation efforts often rely on commercially available arbitrary waveform generators (AWGs) and oscilloscopes, which, while highly capable, are often prohibitively expensive and poorly suited for large-scale or parallelized testing environments. As noted in industry reports, scaling such instrumentation introduces significant challenges in cost, calibration, and signal synchronization, making them inefficient for high-resolution or high-speed analyses in multi-channel systems [1]. On the other hand, an FPGA provides the necessary performance to increase parallelism without a proportional increase in cost, greatly improving testing resolution and speed. When augmented with a set of processors, we introduce a level of accessibility and automatability not currently present in commercial products. To be clear, while the board was designed with the testing of nanowires in mind (and is not capable of measuring DC voltages), it can still be combined with separate lab equipment to interact with Josephson Junction based devices. That said, the flexibility of this system allows for generalized application to any electronic device that demands a specialized testing procedure involving arbitrary signal processing and generation. The money, time, and energy that this innovation will save on cryogenic electronic validation will significantly improve our progress in developing these technologies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162706</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy Efficient Real-time Operating Systems on Chip</title>
<link>https://hdl.handle.net/1721.1/162705</link>
<description>Energy Efficient Real-time Operating Systems on Chip
Kang, Ezra H.
Autonomous micro-robots are crucial for several tasks, such as search and rescue, no-knowledge mapping, and navigation. Without an external power connection, these robots are constrained by their on-platform energy capacity. The power consumption of actuation systems used in micro-robots is within the same order of magnitude as the power consumption of the compute system. Thus, the remaining factor for enabling these micro-robots is the design of energy-efficient compute systems. Energy usage of compute systems is typically dominated by memory operations, which previous efforts have attempted to mitigate with memory-efficient software and hardware. These efforts are enabled by the software/hardware interface, which is implemented as an Operating System (OS). However, Operating Systems for energy-efficient platforms have not been fully explored. Current approaches utilize full general-purpose Operating Systems such as Linux, which can incur large memory and compute overhead penalties. These overheads not only consume the typically limited memory resources of energy-efficient systems, but also increase the number of memory accesses and CPU cycles, both of which are significant contributors to energy consumption. To address these concerns, we propose the design of a computationally and memory efficient Real-time Operating System (RTOS). Our RTOS is designed to minimize both memory footprint and compute cycle overhead. It achieves this primarily through direct physical memory access, cycle-efficient task scheduling, and minimal runtime services to avoid unnecessary processing. Additionally, the modular RTOS kernel includes only the components required by an application in the final binary, reducing code size and memory usage without compromising functionality. The design enables the utilization of energy-efficient hardware accelerators and software, allowing for execution of robotics workloads with minimal memory and cycle overhead.
When comparing robotics algorithms implemented on our proposed RTOS and baseline OSes, our design was able to achieve a 99% reduction in memory footprint. Additionally, it achieved up to a 47% increase in throughput. Thus, our design demonstrates a direct reduction in memory and CPU cycle overhead, which in turn lowers total system memory and energy consumption. The proposed design was demonstrated and verified on a resource constrained system-on-chip on the AMD Virtex Ultrascale+ VCU118 FPGA.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162705</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty and Generality of Transfer Learning Models&#13;
in Predicting Signaling History</title>
<link>https://hdl.handle.net/1721.1/162704</link>
<description>Uncertainty and Generality of Transfer Learning Models&#13;
in Predicting Signaling History
Lu, Claire
Proper cell-cell communication is essential for multicellular development, from embryogenesis to stem cell differentiation. To map these networks, we developed IRIS (Intracellular Response to Infer Signaling state), a semi-supervised deep learning method that fits conditional variational autoencoders (CVAE) to single-cell RNA sequencing (scRNA-seq) data. IRIS is able to annotate cellular signaling states of individual cells using only their gene expression. Currently, IRIS has been validated in developmental contexts, including gastrulation, early endoderm organogenesis, and mesoderm lineages in mouse embryos. However, its predictions often show extremely high or extremely low confidence, suggesting a need for methods to prevent overconfidence and better account for uncertainty. To generalize IRIS to broader cell-cell communication problems, we combined engineering and experimental approaches, integrating uncertainty quantification techniques with new biological datasets. We implemented three approaches for estimating uncertainty in IRIS predictions: stochastic sampling, Monte Carlo dropout, and ensemble prediction. These approaches were evaluated on two new endoderm and mesenchyme combinatorial perturbation screens. Across all methods, uncertainty values reliably reflected the varying difficulty of predicting different signaling pathways, driven by both biological complexity and dataset representation. Moreover, higher uncertainty was consistently associated with lower prediction accuracy, confirming uncertainty as a useful proxy for model confidence. All three methods identified similar high-uncertainty cell populations, supporting their consistency and validity. By incorporating uncertainty quantification into IRIS, we provide more robust and interpretable predictions that can guide future experiments and enhance the model’s applicability across diverse biological contexts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162704</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Core Material Evaluation for Magnetic Energy Harvester&#13;
Applications</title>
<link>https://hdl.handle.net/1721.1/162703</link>
<description>Core Material Evaluation for Magnetic Energy Harvester&#13;
Applications
Le, Khang D.
Current transformer magnetic energy harvesters (CTMEHs) harvest magnetic energy from an AC current-carrying conductor and convert this energy into usable electrical energy for use by various low-power devices, such as sensors and microcontrollers. The amount of power harvested by CTMEHs is determined by the primary current passing through the conductor; however, variables such as the magnetic core’s dimensions, magnetic properties, and turn count also influence performance. Previous works have focused mainly on analytical or numerical modeling of CTMEH behavior or improving power harvest performance given a specific magnetic core material. Some existing research has compared the effects of different core materials on CTMEH power harvest in a limited fashion, but a comprehensive, comparative study of high permeability, high saturation flux density CTMEHs has yet to be conducted. This thesis establishes core material as the primary independent variable, alongside primary current and frequency, to isolate the effects of magnetic properties on the amount of power a magnetic core can harvest under different current conditions. The thesis concludes that nanocrystalline material excels in lower-current applications, while silicon steel material offers better performance in higher-current applications across all frequencies when used in CTMEHs, offering system designers enticing material choices depending on the nature of the application.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162703</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Eliciting Visualization Attitudes with Repertory Grids</title>
<link>https://hdl.handle.net/1721.1/162702</link>
<description>Eliciting Visualization Attitudes with Repertory Grids
Hua, Dana
Research in public data communication typically focuses on improving the processes of encoding and decoding, answering the question of how to design a visualization to best communicate information to an audience. However, by treating visual communications simply as conduits for information, we ignore an important aspect of how people interact with communications. We ignore the attitudes – the thoughts, feelings, and intentions toward action – a person may form from communicative artifacts based on their personal values and experiences. Recent research has demonstrated that, as with natural language, readers of visualizations make social attributions: inferences about the identities and characteristics of an artifact’s makers, modes of distribution, and tools of production. In this thesis, I contribute a method to systematically map the visualization attitudes of an individual and the associated ideologies of their sociocultural group, by adapting the repertory grid technique from clinical psychology to the context of data visualization. I demonstrate the effectiveness of this mixed methods approach by eliciting both the attitudes towards a visualization most salient to an individual, and the design features of the visualization that inform each attitude. This method offers a new way of exploring the content and latent structure of visualization attitudes, which opens new avenues for socioculturally-informed and intervention-driven research in data visualization.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162702</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing scheduling for stream structured programming for StreamIt</title>
<link>https://hdl.handle.net/1721.1/162701</link>
<description>Optimizing scheduling for stream structured programming for StreamIt
Dow, Nicholas Lee
As straightforward increases in performance on general-purpose CPUs slow down, the shift to application-specific implementations and hardware has accelerated. This shift toward specialization improves performance, but often at the cost of developer productivity in learning these new tools. StreamIt is a domain-specific language developed to increase the performance of streaming applications while remaining relatively user-friendly. While designed to be parallelized easily, the scheduling backend of the StreamIt compiler is not adapted to the heterogeneous and distributed nature of new accelerator hardware. This thesis details the design and development of a scheduler interface that enables hardware-customized schedulers to be developed quickly. The scheduler interface allows schedulers to take advantage of the unique compiler optimizations enabled by StreamIt’s structure. Two schedulers, one search-based and another heuristic-based, are built using this interface to schedule StreamIt workloads while optimizing differing metrics such as throughput and latency. Our experiments evaluate the performance of these workloads and detail future directions for expanding the interface and the scheduler designs that could take advantage of it.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162701</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigating Electromagnetic Interference in Unshielded MRI: Implementation, Experimentation, and Future Directions</title>
<link>https://hdl.handle.net/1721.1/162700</link>
<description>Mitigating Electromagnetic Interference in Unshielded MRI: Implementation, Experimentation, and Future Directions
Flynn, John M.
Portable, low-field MRI broadens access and enables numerous new applications, such as point-of-care imaging. Operating outside an RF-shielded room introduces electromagnetic interference (EMI), further degrading the signal-to-noise ratio (SNR), which is already diminished by the lower magnetic fields used in portable imaging. Existing methods to reduce EMI perform well in simple noise environments but can struggle with more complex profiles. Relaxing their linear assumptions is hypothesized to yield more robust mitigation algorithms. A system-wide characterization of SNR challenges was carried out on a rebuilt 800 G scanner, existing techniques were validated, and new signal processing approaches were explored to improve image quality. Various analytical approaches showed promise, including dynamic coils/preamps, averaging methods, calibration, and smoothing methods. Groundwork was laid for learning-based methods throughout the pipeline. This work serves as an important baseline for the numerous experiments necessary for full-system optimization.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162700</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Atom-Light Scattering in the Quantum Regime</title>
<link>https://hdl.handle.net/1721.1/162699</link>
<description>Exploring Atom-Light Scattering in the Quantum Regime
Lu, Yu-Kun
Ultracold atoms and molecules are promising platforms for exploring modern quantum science and technologies, such as quantum simulation and quantum computation. Here, light is the essential tool to manipulate and probe these systems. However, unlike in condensed matter systems, where scattering experiments are routinely employed to characterize materials, ultracold atom and molecule systems are usually probed by imaging and not by light scattering.

In this thesis, I present a systematic investigation of atom-light scattering under various scenarios. When atoms are confined in optical lattices, light scattering can be used to explore single-body, two-body, and many-body physics. Focusing on single-atom physics, I study coherent and incoherent light scattering of single-atom wavepackets and the relation to which-way information. For two atoms tightly localized to a 20 nm region on the same lattice site, I demonstrate the strong electric dipolar interactions between them, which result in large momentum transfers and a spectroscopic shift of the resonance. On the many-body side, I show how light scattering can reveal distinct quantum phases at thermal equilibrium or defect generation in dynamical ramps. For atoms released from the optical lattice, I demonstrate that light scattering can read out the quantum statistical information and initial density correlations hidden in the interference of atomic wavepackets.

When atoms move freely in the form of degenerate quantum gases, I investigate how quantum statistics, phase transitions, and interactions modify the atomic pair correlation and consequently the light scattering. For a thermal gas at high density, I demonstrate nonlinear optical effects from high optical density and a high scattering rate.

Finally, I describe our recent efforts on manipulating atoms at subwavelength length scales. I discuss our attempts in optical tweezers and in optical lattices, and the prospect of observing magnetic pairing between two distant layers under attractive dipolar interaction.

The techniques presented in this thesis should be of general use for pursuing quantum science and technology with ultracold atoms and molecules.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162699</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-Model Any-Subgroup Equivariance via Symmetric Positional Encodings</title>
<link>https://hdl.handle.net/1721.1/162698</link>
<description>Single-Model Any-Subgroup Equivariance via Symmetric Positional Encodings
Goel, Abhinav
The inclusion of symmetries as an inductive bias, known as “equivariance”, often improves generalization on geometric data (e.g., grids, sets, and graphs). However, equivariant architectures are usually highly constrained, designed for pre-chosen symmetries, and cannot be applied to datasets with different symmetries. This work constructs a single model that is simultaneously equivariant to several groups, simply by regulating a certain input feature. Starting with a permutation-equivariant base model respecting the full Sₙ symmetry group, we can obtain equivariance to a subgroup G ⊆ Sₙ by using a symmetry-breaking input that is G-symmetric. Under mild conditions, the resulting network is equivariant only to G. Finding an input with automorphism group exactly G is computationally hard, but this can be overcome by relaxing exact symmetry breaking to approximate symmetry breaking, leveraging the notion of 2-closure to derive fast algorithms. This method is validated in symmetry selection, multitask, and transfer learning settings, demonstrating that a single network equivariant to multiple permutation subgroups outperforms both separate equivariant models and a single non-equivariant model.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162698</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of Precision Successive-approximation-register Analog-to-digital Converters for Digital Root-mean-square Calculation</title>
<link>https://hdl.handle.net/1721.1/162697</link>
<description>Application of Precision Successive-approximation-register Analog-to-digital Converters for Digital Root-mean-square Calculation
Choi, Sun Mee
The advancement of semiconductor manufacturing processes has made powerful microcontrollers available at lower costs, granting system designers the flexibility to select between analog and digital signal processing techniques. Enabled by recent developments in low-power successive approximation register (SAR) analog-to-digital converter (ADC) technology, a digital approach to root-mean-square (RMS) measurement is proposed. The work begins with an explicit accumulation and averaging approach, and a set of improvements is designed to increase measurement accuracy and reliability. Algorithms are compared using the metrics of error, power efficiency, latency, and digital overhead. High-performing and power-efficient digital RMS measurement methods could be valuable for decentralized instrumentation systems, such as smart grids and factory automation, where long-lasting handheld and portable solutions are becoming critical.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162697</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hosting LLMs on Shared GPUs</title>
<link>https://hdl.handle.net/1721.1/162696</link>
<description>Hosting LLMs on Shared GPUs
Choi, Kenneth K.
Large language models (LLMs) have emerged as powerful tools for a wide array of applications. Serving multiple LLMs on shared GPUs has increasingly gained attention as single providers need to support multiple applications (summarization, chat, code generation), different model versions (A/B testing), and various types of customers. However, multi-model serving is particularly challenging, as static memory partitioning can lead to severe under-utilization, fragmentation, and latency spikes, while dynamic loading of model weights can cause unacceptable downtime due to high model loading overheads. To address these issues, we introduce hierarchical paging, a novel key-value (KV) cache management strategy, and we implement it within the vLLM serving engine. Hierarchical paging organizes GPU memory into a two-level hierarchy: large contiguous memory blocks allocated to individual models, which are then subdivided into smaller blocks that are allocated to different requests issued to that model. Our design enables dynamic memory sharing across models, improving model throughput and overcoming key problems of existing approaches. We detail our implementation and present end-to-end experiments that showcase these throughput improvements under different workloads. We include further evaluations on the runtime overheads of our hierarchical paging implementation, which show that the overheads are insignificant. Most importantly, we demonstrate that hierarchical paging is easy to implement, optimizing for implementation effort and maintainability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162696</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Topology-Guided Diffusion Process for Synthetic Tabular Data Generation</title>
<link>https://hdl.handle.net/1721.1/162695</link>
<description>A Topology-Guided Diffusion Process for Synthetic Tabular Data Generation
Cheng, Emily
Synthesizing realistic tabular data is crucial for many analytical applications, including policy evaluation related to household energy use. However, detailed household-level consumption data, necessary for such evaluation, are scarce at fine geographic scales, as public surveys like the U.S. Residential Energy Consumption Survey (RECS) provide too few observations. We address this gap by developing a topology-guided, diffusion-based generative model that produces realistic synthetic household data; our approach handles two key challenges in this setting: (1) mixed continuous and discrete features and (2) strong hierarchical dependencies among variables. To handle categorical features, we build upon recent advancements in discrete diffusion, particularly TabDDPM [1] and TabDiff [2], which discretize the diffusion process through noise transition matrices, effectively extending diffusion methods to discrete tabular domains. To address hierarchical dependence, we include (1) a structure-aware noise schedule that injects noise from the leaves to the root along an approximate Chow–Liu tree constructed from the variables and (2) a masked self-attention denoiser that aligns with the same graphical structure. Extensive experiments show that our structured diffusion model outperforms the baseline TabDiff on data with tree-like dependencies, due to the inductive bias from our structure-aware noise schedule. On data that only approximately follows a tree, such as the RECS dataset, our model maintains competitive performance, slightly outperforming standard diffusion methods. These results highlight the potential for future work to further optimize the tradeoff between structural approximation and estimation accuracy, and for applications beyond the energy domain.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162695</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Dynamic Treatment Regimes: Collaborative Search and LLM-Driven Decision Trees</title>
<link>https://hdl.handle.net/1721.1/162694</link>
<description>On Dynamic Treatment Regimes: Collaborative Search and LLM-Driven Decision Trees
Gregory, Cale
This thesis evaluates the validity of current dynamic treatment regime algorithms and presents a novel data structure for extracting treatment decisions from unstructured clinical notes. The main contribution is the Clinical Decision Tree (CDT), which uses large language models (LLMs) to extract key decisions in chronic disease treatment. This addresses the main pain points of dynamic treatment regimes: low interpretability and the reliance of traditional machine learning methods on poorly collected data. This work contains extensive experiments on mortality prediction, time series forecasting, and synthetic patient modeling. Experiments show that vital-based representations do not capture enough meaningful data about a patient to accurately predict and evaluate new treatment methods. Using latent embeddings and vector search, experiments show that the collected vitals of patients fail to differentiate the outcomes of related patients. Conversely, clinical notes contain complex and substantial information about clinical decision making, and LLMs enable valuable knowledge extraction from this unstructured data. Utilizing LLMs, experimental results and expert evaluation indicate that CDTs can extract and distill interpretable treatment decisions. Thus, CDTs are a valuable tool that can be refined to increase confidence in treatment decisions and to identify rare and uncommon medical practices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162694</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>"Eliminating the Friction": An AI-powered Assistant for StarLogo Nova</title>
<link>https://hdl.handle.net/1721.1/162693</link>
<description>"Eliminating the Friction": An AI-powered Assistant for StarLogo Nova
Han, Aileen
Agent-based modeling is a technique that allows students to reason about and create models of real-life phenomena. However, the programmatic implementations of this technique, such as StarLogo Nova, often introduce “friction”; students may get stuck on the syntactical details of the implementation before being able to engage in the mechanistic thinking behind their models. In order to shift students’ focus towards the goal of understanding the systems they are building, we set out to create an AI-powered assistant for StarLogo Nova that can explain and debug students’ code. After identifying and experimenting with various parameters of AI models in an attempt to improve their performance, we were able to build the StarLogo Turtle Helper, an easily accessible assistant integrated into the platform that can produce accurate responses to StarLogo-related questions. Through this process, we discovered two key properties of these models: first, the method through which these models use provided documentation (called retrieval-augmented generation, or RAG) is quite rudimentary, so any background knowledge should be included in the prompt or the model’s system instructions instead. Second, these models perform best if they are designed to only serve one purpose, so creating multiple models and chaining them together may be the best way to achieve more complex functionality.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162693</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Progression of Metabolic Dysfunction-associated Steatotic Liver Disease</title>
<link>https://hdl.handle.net/1721.1/162692</link>
<description>Predicting Progression of Metabolic Dysfunction-associated Steatotic Liver Disease
Li, Jonathan
This work focuses on the progression from metabolic dysfunction-associated fatty liver to metabolic dysfunction-associated steatohepatitis, a more serious prognosis that can lead to liver failure and death. Additional adverse progressed outcomes include hepatic failure, fibrosis, cirrhosis, and malignant neoplasm of the liver and intrahepatic bile ducts. We explore the possibility of using different machine learning techniques, including logistic regression, XGBoost, random forest, and decision trees, to predict the likelihood of progression. We use data from Massachusetts General Brigham to train our models, incorporating demographics, physical measurements, lab results, and doctor notes. Our best model was an XGBoost classifier with an AUROC of 0.800, with random forest achieving similar performance at 0.786. However, all of our models had low AUPRC and sensitivity, indicating both overfitting and an imbalanced dataset.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162692</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Traceability via OTrace Concepts and Implementation</title>
<link>https://hdl.handle.net/1721.1/162691</link>
<description>Data Traceability via OTrace Concepts and Implementation
Farooq, Ashar
Financial transactions are commonplace in the modern world. Every day, consumers make purchases on e-commerce sites and rely on third-party financial services, such as services that predict their credit score, provide customized budget recommendations, or identify the best loan for them. These services often need financial information from the consumer, and how that information is used is not always clear to the consumer. In other words, consumer data are being used without their knowledge and consent. The proposed solution, a traceability protocol called OTrace, aims to mitigate this issue by letting consumers know where their data is and what is being done with it. This work aims to bolster OTrace into a protocol that consumers can actually use as a service and that financial institutions can trust to tell consumers which third-party financial services hold their data. Specifically, it develops a more general specification for a traceable and accountable data sharing system that layers OTrace on top of OAuth, complemented by a model deployment example. New OTrace API endpoints corresponding to the updated specification, an entirely new OTrace Web implementation, and accompanying analysis are intended to advance data traceability, data privacy, and open banking. A model deployment of an OTrace service on top of an OAuth protocol demonstrates how the system can be used by various parties and can ultimately scale to address unintended data usage and the lack of transparency about where consumer data resides.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162691</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equivariant Autoregressive Models for Molecular Generation</title>
<link>https://hdl.handle.net/1721.1/162690</link>
<description>Equivariant Autoregressive Models for Molecular Generation
Kim, Song Eun
In-silico generation of diverse molecular structures has emerged as a promising method to navigate the complex chemical landscape, with direct applications to inverse material design and drug discovery. However, 3D molecular structure generation comes with several unique challenges; generated structures must be invariant under rotations and translations in 3D space, and must satisfy basic chemical bonding rules. Recently, E(3)-equivariant neural networks that utilize higher-order rotationally-equivariant features have shown improved performance on a wide range of atomistic tasks, including structure generation. Previously, we have developed Symphony, an E(3)-equivariant autoregressive generative model for 3D structures of small molecules. At each sampling iteration, a single focus atom is selected, which is then used to decide on the next atom’s position within its neighborhood. Symphony built on previous autoregressive models by using message-passing with higher-order equivariant features, allowing a novel representation of probability distributions via spherical harmonic signals. Symphony’s performance approached that of state-of-the-art diffusion models while remaining relatively lightweight. However, it continued to face challenges in error accumulation and determining bond lengths, and it was only evaluated against small organic molecules. Here, we expand on Symphony’s capabilities and make it more compatible with larger atomic structures. We add improvements to the embedders, split the radial and angular components when predicting atom positions, and increase the radial cutoff for atomic neighborhoods considered during prediction. We also increase Symphony’s training and inference speeds through a new implementation in PyTorch, making inference nearly 4x faster than previously. In addition, we demonstrate its effectiveness across a variety of tasks, including small molecule and protein backbone generation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162690</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vigilis: Leveraging Language Models for Fraud Detection in Mobile Communications and Financial Transactions</title>
<link>https://hdl.handle.net/1721.1/162689</link>
<description>Vigilis: Leveraging Language Models for Fraud Detection in Mobile Communications and Financial Transactions
Das, Gaurab
Although advances in security have strengthened defenses in digital financial systems, attackers increasingly rely on social engineering to achieve their goals. These attacks are difficult to detect and prevent with existing security measures. To address this, we propose Vigilis, a fraud-protected application that employs advanced language models to counter such attacks in calls, texts, and payments. We first collect and make available a corpus of fraudulent calls from the Internet and train lightweight transformer-based models that achieve fraud detection accuracies of up to 94% and 87% on transcript and audio modalities, respectively. We integrate these models into a real-time call system within Vigilis that operates entirely on-device, enabling accurate fraud detection in an efficient and privacy-preserving manner. We then extend Vigilis to incorporate context-aware transaction authentication, where the underlying social context behind a transaction is determined from calls, texts, and browsing history and used to infer the transaction’s validity. By uniquely incorporating social concepts into traditional cybersecurity techniques, we attempt to counter and mitigate issues related to social engineering attacks in financial fraud.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162689</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>GDSVD: Scalable k-SVD via Gradient Descent</title>
<link>https://hdl.handle.net/1721.1/162688</link>
<description>GDSVD: Scalable k-SVD via Gradient Descent
Gan, Emily
We show that gradient descent with a simple, universal rule for step-size selection provably finds the k-SVD, i.e., the k ≥ 1 largest singular values and corresponding vectors, of any matrix, despite nonconvexity. There has been substantial progress towards this in the past few years, where existing results establish such guarantees for the exact-parameterized and over-parameterized settings with an oracle-provided step size. But guarantees for the generic setting, with a step-size selection that does not require oracle-provided information, have remained a challenge. We overcome this challenge and establish that gradient descent with an appealingly simple adaptive step size (akin to preconditioning) and random initialization enjoys global linear convergence in the generic setting. Our convergence analysis reveals that the gradient method has an attracting region, within which it behaves like Heron’s method (a.k.a. the Babylonian method). Empirically, we validate the theoretical results. The emergence of a modern compute infrastructure for iterative optimization, coupled with this work, is likely to provide a means of solving k-SVD for very large matrices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162688</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of dispersion metrics for estimating transcriptional noise in single-cell RNA-seq data and applications to cardiomyocyte biology</title>
<link>https://hdl.handle.net/1721.1/162687</link>
<description>Comparison of dispersion metrics for estimating transcriptional noise in single-cell RNA-seq data and applications to cardiomyocyte biology
Chen, Tina T.
Transcription is a dynamic process with a multitude of characteristics, including transcript level, burst frequency, amplitude, and variability. Single-cell RNA sequencing data analysis often focuses on comparing transcription levels. However, these analyses capture only a portion of the wealth of information conveyed by transcription. The quantification and analysis of transcriptional variability poses an opportunity to study transcription and gene regulation from a new angle. Transcriptional variability has already been implicated in a number of biological processes, including in immune system development and in aging. Yet, the most appropriate method for measuring transcriptional variability in single-cell data has remained relatively unclear. Here, we simulated single-cell data with varying dispersion and dataset size to assess the relative responsiveness of the Gini index, variance-to-mean ratio, variance, and Shannon entropy to variability in single-cell counts. We found that the variance-to-mean ratio scales approximately linearly with increasing dispersion, and that it is scale-invariant. The Gini index displayed paradoxical behavior, and Shannon entropy was not scale-invariant. Thus, we applied the variance-to-mean ratio to measure transcriptional variability in two publicly available datasets studying congenital heart defects in mouse models. We first found that change in transcriptional variability does not correlate with gene characteristics such as transcript level and evolutionary gene age. We also found that using change in transcriptional variability to focus GSEA and TF motif enrichment analyses revealed both genes with known involvement in cardiomyopathy and new genes and pathways as potential targets for future study.
Notably, many of the genes and pathways identified through transcriptional variability analysis were not found by differential expression analysis, suggesting that transcriptional variability can provide additional biologically relevant information beyond what is observed from studying mean expression alone.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162687</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding Annotation to Mixed-Media Types in a Large-Scale Social Annotation Platform</title>
<link>https://hdl.handle.net/1721.1/162686</link>
<description>Expanding Annotation to Mixed-Media Types in a Large-Scale Social Annotation Platform
Heiberger, Harry G.
In recent years, social annotation systems have become a popular and effective tool for hosting collaborative discussions on assigned readings. One such tool created by our lab is NB. Over the last twelve years, hundreds of instructors have incorporated NB within their classes, with over 50,000 students leaving millions of annotations [1]. While feedback for NB has mostly been positive, one major limitation is its difficulty in annotating documents with nested media types. As multimodal forms of learning beyond just text are becoming increasingly common in educational assignments, having the ability to annotate beyond simple text documents would greatly increase the utility of NB in the modern classroom. This work seeks to remedy this issue by expanding the types of documents NB can successfully annotate, specifically focusing on three mixed-media issue types: independently moving text components, image annotation, and video annotation. We will explore the design space of possible implementation strategies for these features and discuss the specific design decisions that were made when adding them to NB. We hope that by increasing the types of documents NB can annotate, we will better fulfill its goal of enhancing student engagement and learning.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162686</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pareto Task Inference Analysis of Single-Cell RNA Sequencing of Human Placenta Reveals Biological Insights into Adverse Pregnancy Outcomes</title>
<link>https://hdl.handle.net/1721.1/162685</link>
<description>Pareto Task Inference Analysis of Single-Cell RNA Sequencing of Human Placenta Reveals Biological Insights into Adverse Pregnancy Outcomes
Eppinger, Aria R.
Adverse pregnancy outcomes (APOs), such as preeclampsia, fetal growth restriction, and preterm birth, occur in 10-15% of pregnancies. There is limited knowledge of how the cellular states in the placenta and decidua tissues are altered in women with particular APOs or may contribute to APOs. Single-cell RNA sequencing (scRNAseq) approaches have characterized cellular populations and interactions at the maternal-fetal interface using traditional dimensionality-reducing methods such as UMAP-based clustering. However, these techniques may generate limited representations of nuanced cellular functions and biological relationships among and within cell clusters. Pareto Task Inference (ParTI), a dimensionality reduction technique that fits data to an n-dimensional polygon or polytope, models how cells optimize among multiple biological functions and transition between states. We applied ParTI to assess its ability to identify nuanced cellular states and intercellular relationships and to highlight biological mechanisms underlying specific APOs. We analyzed scRNAseq data from 50 whole placental homogenates collected from healthy pregnancies and those complicated by fetal growth restriction (FGR), preterm preeclampsia (PrePET), spontaneous preterm birth (PTB), term preeclampsia or gestational hypertension (TermPET/GHTN), or type 1 diabetes (DM1). ParTI was applied to the dataset with 1) all main cell lineages (B-cells, trophoblasts, stromal, endothelial, Hofbauer, T-NK, maternal myeloid cells) and 2) syncytiotrophoblasts (SCTs), a sublineage of trophoblasts. Marker gene analysis and gene set enrichment analysis for the ParTI polytope vertices, called archetypes, were performed to assess the biological states associated with the archetypes. We demonstrated that the ParTI polytope can separate both broad cell lineages and sublineages, suggesting that iteratively applying ParTI can serve as an alternative clustering approach when cell-lineage marker genes are previously known.
Additionally, ParTI applied to SCTs separated healthy controls from pregnancies complicated by specific APOs. Gene set enrichment analysis of the cells proximal to the archetypes suggests biological differences in SCTs with specific APOs compared to the controls. Thus, ParTI can identify biological mechanisms underlying specific APOs and be applied to additional datasets to uncover biological relationships among and within cell-type clusters.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162685</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Recursion with Iteration: Enabling LLVM Loop Optimizations for Recursive Data Structure Traversal</title>
<link>https://hdl.handle.net/1721.1/162684</link>
<description>Modeling Recursion with Iteration: Enabling LLVM Loop Optimizations for Recursive Data Structure Traversal
Cuevas, Elie E.
Recursive algorithms are a natural and expressive way to traverse complex data structures, but they often miss opportunities for optimization in modern compiler infrastructures like LLVM. This thesis explores a novel technique that temporarily transforms recursive traversals into synthetic loop-like structures, enabling existing loop-specific optimizations to apply, before transforming them back. By extending Clang’s semantic analysis and implementing a custom LLVM transformation pass, recursive traversals are initially structured into synthetic loops that can benefit from existing loop analyses and optimizations. After these optimizations are applied, the transformation restores the original recursive semantics, preserving program behavior while incorporating performance gains. Evaluation across custom microbenchmarks shows that while general recursive traversals suffer a modest overhead, workloads designed to benefit from specific loop-focused optimizations achieve up to a 30% performance improvement. This demonstrates that even though the approach requires temporarily "misrepresenting" code to the compiler, selective exposure of recursive patterns to loop-based optimization infrastructure is practical and effective. This work establishes a proof-of-concept for compiler transformations that bridge recursion and iteration, paving the way for future systems that better optimize real-world recursive code without sacrificing clarity or maintainability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162684</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Grounding Time Series in Language: Interpretable Reasoning with Large Language Models</title>
<link>https://hdl.handle.net/1721.1/162683</link>
<description>Grounding Time Series in Language: Interpretable Reasoning with Large Language Models
Chen, Lily
Can large language models (LLMs) classify time-series data by reasoning like a domain expert—if given the right language? We propose a method that expresses statistical time-series features in natural language, enabling LLMs to perform classification with structured, interpretable reasoning. By grounding low-level signal descriptors in semantic context, our approach reframes time-series classification as a language-based reasoning task. We evaluate this method across 23 diverse univariate datasets spanning biomedical, sensor, and human activity domains. Despite requiring no fine-tuning, it achieves competitive accuracy compared to traditional and foundation model baselines. Our method also enables models to generate expert-style justifications, providing interpretable insights into their decision-making process. We present one of the first large-scale analyses of LLM reasoning over statistical time-series features, examining calibration, explanation structure, and reasoning behavior. This work highlights the potential of language-native interfaces for interpretable and trustworthy time-series classification.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162683</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>National crop field delineation for the United States</title>
<link>https://hdl.handle.net/1721.1/162681</link>
<description>National crop field delineation for the United States
Chen, Zitong
Comprehensive and accurate crop field boundary maps are crucial for digital agriculture, land management, and environmental monitoring. However, no high-quality field boundary dataset is publicly available in the United States. This thesis addresses this gap by creating a new, large dataset and training a deep learning model capable of mapping field boundaries. We built a dataset of over 15,000 image-mask pairs using high-resolution National Agriculture Imagery Program (NAIP) satellite imagery and curated field boundary labels. This dataset covers a variety of leading agricultural states and includes images taken at different scales to capture a wide variety of field sizes and layouts. We used this dataset to train an adapted ResUNet++ neural network model designed to segment crop fields. The trained model achieved around 0.8 for pixel-level accuracy, showing it can generally identify field areas well. However, its performance in matching predicted individual field instances with the ground truth instances (measured by mean instance Intersection over Union, or mIoU) was around 0.5. This lower instance score was largely due to the post-processing step, which converts the model’s probability predictions into separate field instances. Despite this, the field polygons produced by our approach are visually coherent with satellite field images and can be readily used with geospatial tools like Google Earth Engine. Our work provides a practical starting point for future research on mapping fields across the contiguous U.S. Potential directions for improvements may involve developing sharper boundary predictions, exploring direct instance segmentation models, refining post-processing methods, and expanding the dataset to include more challenging areas.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162681</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Implementation of an Analog High Power Broadband Self Interference Canceller for In Band Full Duplex</title>
<link>https://hdl.handle.net/1721.1/162679</link>
<description>Design and Implementation of an Analog High Power Broadband Self Interference Canceller for In Band Full Duplex
Hanly, Bianca Marie
A Self-Interference Canceller (SIC) is the principal component that enables Simultaneous Transmit And Receive (STAR) in radio signal broadcasting. Previous research and designs by other groups have resulted in systems that either operate at high powers or are capable of cancellation over a wide bandwidth. This work seeks to build upon previous research to design an analog SIC that is capable of both high-power (∼100W) and wide instantaneous bandwidth (∼1GHz) cancellation. The system is designed as a vector modulator using off-the-shelf hybrid couplers and switches with a custom variable attenuator designed using PIN diodes in a Waugh attenuator architecture. The system was fabricated on a four-layer PCB and measured with a network analyzer. Simulated results for the variable attenuator and the overall vector modulator are presented.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162679</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementation of Semantic SLAM on a Mobile Manipulator System</title>
<link>https://hdl.handle.net/1721.1/162678</link>
<description>Implementation of Semantic SLAM on a Mobile Manipulator System
Francis, Zachary R.
In the field of robotics, the development of household robots capable of performing everyday tasks continues to be a major area of research and practical interest. Many domestic chores—such as picking up and moving objects from one location to another—have been successfully performed by stationary robotic manipulators paired with visual perception systems. However, accomplishing more complex, varied, and spatially distributed tasks in real-world home environments requires a mobile platform with a more human-like form factor. These tasks demand greater flexibility, spatial awareness, and interaction capabilities than fixed systems can typically provide. This work focuses on the RBY1 robot from Rainbow Robotics, a humanoid platform designed to support advanced manipulation and mobility. A range of tools and modules were developed to enhance its functionality, including software for semantic perception, task execution, and environment interaction. This thesis provides a technical overview of these tools, highlighting their roles in collecting new datasets that can be used for semantic SLAM research. In the future, these tools can enable the robot to operate more effectively in domestic settings, towards the ultimate goal of enabling more capable home-assistive robots.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162678</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Introductory Computer Science Students’ Programming Process When Using a Generative AI Tutor (PyTutor)</title>
<link>https://hdl.handle.net/1721.1/162677</link>
<description>Improving Introductory Computer Science Students’ Programming Process When Using a Generative AI Tutor (PyTutor)
Cunningham, Caroline K.
This thesis examined students’ programming process while using PyTutor, a generative AI tutor for introductory computer science students. The thesis addressed three research questions: (1) How does the process of test case creation, with or without PyTutor’s Test Case Runner, impact students’ programming process while using PyTutor? (2) How can prompt engineering of PyTutor’s system prompt be leveraged to improve AI Chat response quality with respect to: (a) reducing the amount of code revealed in the answer, (b) improving the conciseness of responses, and (c) having the AI chat give the student test cases as a tool to understand code correctness? (3) How do PyTutor’s responses from the updated prompt affect the programming process for computer science students? A key finding from a focus group in the first stage (n=9), apart from test cases, was that the majority of participants who asked PyTutor questions received at least three lines of code, which is not ideal for PyTutor’s pedagogical purpose. This discovery inspired the next phase of the thesis: prompt engineering PyTutor, which resulted in an updated prompt. Responses from both the updated prompt and the original prompt were scored using an evaluation rubric. For the "Students thinking through problem" category of the evaluation rubric, the distribution of points for responses from the updated prompt was statistically significantly greater than the distribution of points for responses from the original prompt. Finally, participants were asked to solve a programming problem using either PyTutor with the updated prompt (n=10) or PyTutor with the original prompt (n=2). Across the focus groups from the first and final stages, I found that fewer participants who used PyTutor with the updated prompt received at least three lines of code. Furthermore, participants who used PyTutor with the updated prompt required a greater number of messages before first receiving three lines of code.
Additionally, all four participants who received at least three lines of code from PyTutor with the updated prompt asked mostly high-level questions. As participant feedback suggested that PyTutor’s responses to high-level questions could be repetitive, this data highlights a new direction: improving PyTutor’s responses to high-level questions to benefit students’ programming process.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162677</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Initial segments in ordinal recursion theory.</title>
<link>https://hdl.handle.net/1721.1/162619</link>
<description>Initial segments in ordinal recursion theory.
Dorer, David John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1979; Vita.; Bibliography: leaf 49.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162619</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tool and chip temperatures in machine shop practice</title>
<link>https://hdl.handle.net/1721.1/162618</link>
<description>Tool and chip temperatures in machine shop practice
Shore, Henry.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1924; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1924 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162618</guid>
<dc:date>1924-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The oxidation of sulphur dioxide in Cottrell precipitators of a contact sulphuric acid plant</title>
<link>https://hdl.handle.net/1721.1/162617</link>
<description>The oxidation of sulphur dioxide in Cottrell precipitators of a contact sulphuric acid plant
Haberstoh, Robert H.; Milligan, Sydney.; Roever, Paul H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1931
</description>
<pubDate>Thu, 01 Jan 1931 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162617</guid>
<dc:date>1931-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Production of carbon black by the decomposition of methane with electrically heated wires</title>
<link>https://hdl.handle.net/1721.1/162616</link>
<description>Production of carbon black by the decomposition of methane with electrically heated wires
Donatello, Dominic G.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1939; Includes bibliographical references (leaf 24).
</description>
<pubDate>Sun, 01 Jan 1939 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162616</guid>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Models for investigating the unreliability of freight shipments by rail.</title>
<link>https://hdl.handle.net/1721.1/162615</link>
<description>Models for investigating the unreliability of freight shipments by rail.
Folk, Joseph Frederick.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Vita.; Bibliography: leaves 279-284.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162615</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A set-theoretic approach to state estimation.</title>
<link>https://hdl.handle.net/1721.1/162614</link>
<description>A set-theoretic approach to state estimation.
Hnyilicza, Esteban.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1969; Bibliography: leaves 112-113.
</description>
<pubDate>Wed, 01 Jan 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162614</guid>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a voltage-limiting device using SIC nonlinear resistors.</title>
<link>https://hdl.handle.net/1721.1/162613</link>
<description>Design of a voltage-limiting device using SIC nonlinear resistors.
Asamoah, William Kafui.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1974; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162613</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An analog circuit simulator for the Connection Machine</title>
<link>https://hdl.handle.net/1721.1/162612</link>
<description>An analog circuit simulator for the Connection Machine
De Beus, Eric.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1987; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162612</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boiling heat transfer in rotating channels with reference to gas turbine blade cooling</title>
<link>https://hdl.handle.net/1721.1/162611</link>
<description>Boiling heat transfer in rotating channels with reference to gas turbine blade cooling
Mudawar, Issam Abdallah.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162611</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the developments in the construction, equipment and operation of street railway cars</title>
<link>https://hdl.handle.net/1721.1/162610</link>
<description>A study of the developments in the construction, equipment and operation of street railway cars
French, Grant Keith.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1920
</description>
<pubDate>Thu, 01 Jan 1920 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162610</guid>
<dc:date>1920-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Precision Binary Trait Association on Phylogenetic Trees</title>
<link>https://hdl.handle.net/1721.1/162565</link>
<description>High Precision Binary Trait Association on Phylogenetic Trees
Balogun, Ishaq O.
Understanding how genetic variation drives microbial phenotypes is fundamental to advancing microbiology, particularly in pathogenicity, drug resistance, and host adaptation. Traditional genome-wide association study (GWAS) methods fail to account for shared evolutionary history, confounding association analyses. Microbial GWAS approaches emerged to address this, but modern methods often lack the statistical power to detect associations while controlling false discoveries, and face computational limits at scale. Here, we present SimPhyNI (Simulation-based Phylogenetic iNteraction Inference), a computational framework for detecting binary trait-trait associations in microbial populations. &#13;
&#13;
SimPhyNI uses stochastic simulations of trait evolution on phylogenetic trees to detect positive and negative associations with high precision and recall. Benchmarking on large synthetic datasets, SimPhyNI achieved a precision-recall AUC (PR AUC) of 0.987 and 0.975 for positive and negative interactions, respectively, indicating near-perfect discrimination of true from neutral associations. Competing methods showed substantially lower performance, especially for negative associations. We further applied SimPhyNI to empirical datasets, recovering known biology and generating plausible hypotheses for novel mechanisms. &#13;
&#13;
Though tested here on binary traits, SimPhyNI’s design supports future extension to multi-state and continuous traits using generalized models. Its high recall also makes it well-suited for constructing gene interaction networks and identifying co-evolving trait modules. By combining evolutionary modeling with scalable statistics, SimPhyNI advances our ability to uncover the genetic interactions that drive microbial function, ecology, and disease.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162565</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundations for Building an Innovation-Centric Product Development Framework for Medical Devices</title>
<link>https://hdl.handle.net/1721.1/162564</link>
<description>Foundations for Building an Innovation-Centric Product Development Framework for Medical Devices
Rajan, Neena E.
The medical device industry, governed by a tight regulatory landscape, often relies heavily on structured Product Development Processes (PDPs) to bring innovative solutions to market. These structured processes create significant challenges when integrating technological innovations that emerge in the later stages of the development cycle. This study explores the complexities of this "innovation paradox" within large United States-based medical device corporations, examining how the rigidity of traditional PDP models affects the incorporation of innovative changes to in-flight projects. Drawing upon insights from a comprehensive literature review and a quantitative analysis utilizing a Monte Carlo simulation, this research highlights the impact of integrating an innovative change on the overall project timeline and cost. The simulation results show that introducing innovative changes to the PDP typically extends project timelines and increases total net present cost, with both effects depending on the timing of the change and its technological maturity. Introducing changes in later project phases significantly increases both duration and cost compared to earlier phases. Changes with lower technological maturity led to greater duration and cost escalations, especially when introduced late in the development cycle. To balance regulatory requirements and PDP agility, large medical device companies can adopt hybrid PDP models, establish dedicated innovation assessment teams, create flexible product designs, and focus on value-driven innovations that meet patient and market needs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162564</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Savaal: A system for automatically generating high-quality questions from unseen documents</title>
<link>https://hdl.handle.net/1721.1/162563</link>
<description>Savaal: A system for automatically generating high-quality questions from unseen documents
Chandler, Joseph A.
Assessing human understanding through exams and quizzes is fundamental to learning and advancement in both educational and professional settings. However, current solutions to automate the generation of challenging questions from educational materials and documents are insufficient, resulting in superficial or often irrelevant questions. While LLMs have been shown to excel at tasks like question answering, their use for question generation is underexplored for general domains and at scale. This work presents Savaal, a scalable question-generation system that generates higher-order questions from documents, as well as a real-world system implementation for general use. Savaal accomplishes the following goals: (i) scalability, capable of generating hundreds of questions from any document; (ii) depth of understanding, synthesizing higher-order concepts to test learners’ understanding of the material; and (iii) domain independence, generalizing broadly to any field. Rather than naively providing the entire document in context to an LLM, Savaal breaks down the process of generating questions into a three-stage pipeline. We demonstrate that Savaal outperforms the direct prompting baseline as evaluated by 76 human experts on 71 documents across conference papers and PhD dissertations. We additionally contribute a general system for serving Savaal in real-world scenarios. We demonstrate that our system is scalable, enabling fault-tolerant and horizontal scaling of each individual component in response to fluctuations in usage. Moreover, our architecture enables interactive usage from users and collaboration in groups, reflecting real-world organizations like classrooms or enterprises. We hope that the system enables scalable question generation for educational and corporate use-cases.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162563</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Objective Exploration of Refueling Architecture for Sustainable Crewed and Cargo Space Transportation</title>
<link>https://hdl.handle.net/1721.1/162562</link>
<description>Multi-Objective Exploration of Refueling Architecture for Sustainable Crewed and Cargo Space Transportation
Terakado, Daiki
This thesis presents a new integrated framework for evaluating in-space refueling architectures, focusing on their application to human space missions such as Artemis. The framework tightly couples vehicle sizing with a boil-off control model, allowing the evaluation of various combinations of propellant types, refueling locations, and boil-off control. The model captures the dynamic interdependence between the components of the refueling system, the transport vehicle, the refueler, and the depot, using an iterative approach to ensure consistent mass estimates across configurations.&#13;
&#13;
The framework is applied to analyze human landing system (HLS) architectures with refueling in cis-lunar space. The key findings highlight the mass savings from cryocoolers, the benefits of high Isp with LOX/LH2, the benefits of NRHO refueling for acceptable ΔV requirements, and the positive and negative effects of reusability on mass and mission time. Furthermore, the study indicates that the number of required refueling events is more sensitive to payload and refueler capacity than to boil-off losses.&#13;
&#13;
To extend the framework toward long-term, scalable transportation solutions, the thesis compiles a comprehensive set of figures of merit (FoMs) and discusses future model extensions including risks, ISRU, and electric propulsion. Limitations such as the lack of reusable configuration flexibility and insufficient support for Mars mission parameters are identified as areas for future development. This work provides a foundational framework for the exploration of refueling architectures and solid next steps toward designing sustainable and scalable human space transportation systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162562</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Embedded Software-Defined Radio Architectures for 6G Cellular Networks</title>
<link>https://hdl.handle.net/1721.1/162561</link>
<description>Embedded Software-Defined Radio Architectures for 6G Cellular Networks
Urbonas, Jonas
Over the past decades, the widespread adoption of wireless communication technologies in the industrial, scientific, medical, defense, and commercial sectors has resulted in substantial advancements in digital radio technologies. Each new generation of cellular technology, beginning with 1G, has introduced novel use-case scenarios that have challenged the performance of the prevailing digital radio architectures. The newly proposed scenarios for 5G-Advanced, and the upcoming 6G cellular networks due to be standardized by 2030 are no exception. The emerging 6G network components, such as the space-air-ground integrated cell-less networks, as well as the artificial intelligence-native network architecture, drive the demand for flexible and fully reconfigurable radio units supporting multi-GHz instantaneous signal bandwidths, frequency agile radio architectures covering multi-octave frequency ranges, and highly sensitive receivers.&#13;
&#13;
To support these requirements, software-defined radios (SDRs) are becoming an essential building block of next-generation radio networks. This thesis presents a review of software-defined radio technology, examines its history, proposes the requirements of SDR units for 6G cellular networks, and presents a quantitative performance analysis of over 2 million distinct SDR architectures that could be used in 6G communication networks. It does so by defining the key system architectural decisions and their options, including data converter, filter, mixer, and amplifier technologies. It also examines different radio transmitter and receiver architectural topologies, including baseband sampling, IF sampling, direct RF sampling, and fully digital RFSoC, and constructs a multi-attribute utility (MAU) to quantify system performance. The MAU is used to build a tradespace of SDR architectures, enabling the identification of the Pareto frontier. Analysis of SDR system architectures on the Pareto frontier reveals that the performance of direct RF sampling SDR architectures is highly competitive with industry-standard IF sampling. The tradespace is also used to analyze the sensitivity of system performance to individual architectural decisions via a main-effect analysis, allowing quantification of the connectivity and sensitivity of available architectural decisions. Sensitivity analysis reveals that system performance is highly sensitive to receiver architectural decisions, particularly analog-to-digital converters, indicating the need for continued advances in this technology to produce high-performance SDR systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162561</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Design of Architected Lattices for Construction Applications</title>
<link>https://hdl.handle.net/1721.1/162560</link>
<description>Computational Design of Architected Lattices for Construction Applications
Leamon, Sophie
Architected lattices have been utilized in aerospace and research applications for their modularity, scalability, reconfigurability, and high strength-to-weight properties. However, voxels have yet to find widespread integration in the residential or commercial construction industry because of the industry’s distinct system needs. This study identifies the pain points unique to the construction industry that have slowed or prevented the adoption of new practices, highlighting reliance on known materials and methods, and transparency of the design process, as major hurdles to the adoption of innovation in the industry. This study presents a computational approach to designing architected lattices that seeks to address these core issues by making building with architected lattice structures agnostic to material and manufacturing methodology. Three open-source computational approaches to architectural design are proposed: 1) integration of support structures for additively manufactured structures; 2) parametric design of voxels from 2D material, their manufacturing molds, and optional alignment features; and 3) generation of two-dimensional cut files for assembly with 3D-printable joinery. These files are computationally designed and arranged for instantaneous production to demystify the lattice architectural design process, establish a pathway for utilizing all available materials in lattice construction, reduce the overhead costs of experimentation with lattice structures, and eliminate barriers to the fabrication process by enabling accessible manufacturing methods.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162560</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating the Strategic Intent and Competitive Dynamics of China’s Satellite Communications Constellations</title>
<link>https://hdl.handle.net/1721.1/162559</link>
<description>Evaluating the Strategic Intent and Competitive Dynamics of China’s Satellite Communications Constellations
Delkowski, Michal.
This thesis examines the strategic, technical, and economic feasibility of China’s two flagship low Earth orbit (LEO) satellite megaconstellation programs, Guowang and Qianfan, in the context of the rapidly evolving global satellite communication (Satcom) market. Against the backdrop of SpaceX’s Starlink dominance and intensifying geopolitical competition, China’s efforts represent not only a telecommunications infrastructure push but also a broader assertion of technological sovereignty and global influence. This study uses a scenario-based analysis that integrates system throughput analysis and financial forecasting. Three deployment scenarios (base, optimistic, and pessimistic) are analyzed, accounting for satellite production rates, launch capabilities, and regional adoption patterns, particularly across Belt and Road Initiative (BRI) markets. The study also evaluates "system-of-systems" integration with China’s military objectives, and spectrum coordination challenges. Key findings reveal that Guowang becomes marginally viable only in the optimistic scenario, assuming deployment of at least 9,000 satellites, reduced satellite unit costs (targeting ~$300,000 per satellite), expanded gateway infrastructure, and realization of these targets by 2035, while remaining unviable in the base and pessimistic cases. Qianfan faces greater commercial risk, achieving viability only with early adoption in BRI countries and government dual-use contracts, and incurring a pessimistic-case NPV loss exceeding $76B. Resource allocation problem (RAP) modeling suggests that projected throughput may saturate early without major gateway expansion. Both constellations require China to scale reusable rockets and sustain a combined annual launch rate exceeding 1,000 satellites by the early 2030s. Neither constellation meets China’s 2030 rural broadband targets under base-case conditions; over 40% of the 336M unconnected citizens remain underserved without terminal subsidies.
Ultimately, China’s LEO Satcom strategy depends not on satellite count alone but on coordinated progress in launch economics, affordability, dual-use policy, and international partnerships.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162559</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization of CPG budgets in Retailer-led marketing programs</title>
<link>https://hdl.handle.net/1721.1/162558</link>
<description>Optimization of CPG budgets in Retailer-led marketing programs
Gandhi, Abhinav
Grocery retailers and Consumer Packaged Goods (CPG) companies have a symbiotic relationship. Retailers need CPGs to supply the products, and CPGs need retailers’ customers to grow their brands. Since shelf space is limited, CPGs offer trade and marketing funds to prominently feature their brands.&#13;
As part of loyalty programs, retailers offer coupons to customers that are often funded by CPGs. In return, CPGs expect a return on their investment (ROI). Since budgets are limited and are also expected to be fully utilized, it becomes a challenge for the retailer to find the right size of a mailer that balances costs and relevance to customers. This thesis explores how knapsack problems can be used in a non-adaptive setting to help maximize the reach of print and email campaigns.&#13;
Drawing on existing literature, multiple simulations were set up to evaluate budget-constrained allocation and compare two approaches: the multiple-choice knapsack (MCK) and a greedy algorithm. To account for uncertainty in redemption, the newsvendor model was also explored to assess whether deliberate over-allocation can improve budget utilization and increase reach. The preliminary analysis findings offer promising results and provide a setting for further research.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162558</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploration of design strategies and optimization for efficient mass timber structures as a function of column position</title>
<link>https://hdl.handle.net/1721.1/162557</link>
<description>Exploration of design strategies and optimization for efficient mass timber structures as a function of column position
Gerken, Christoph
The building sector is responsible for a large share of global carbon emissions. As the load-bearing structure is particularly material-intensive, a decisive shift can be achieved by improving its design and decreasing its volume. This thesis examines how structural mass timber floor systems can be designed in an efficient, low-waste manner through a design-oriented approach that is immediately applicable within the context of conventional construction techniques and building practices. Reducing material in timber structures has economic and ecological benefits: reduced timber demand entails significant cost savings and decreased building weight, which considerably cuts embodied carbon.&#13;
Since common floor systems act mainly in bending, this work focuses on the reduction of moment forces in standard setups composed of timber slabs, beams, and columns. In principle, bending forces in beams and slabs can be reduced by moving the supports inwards, leading to overhanging structural elements. The original method presented in this thesis shows how this approach applies to conventional mass timber floor systems. This work provides an understanding of how informed column positioning can take advantage of this behavior and allows for material and embodied carbon reduction through design. The consequent architectural implications of the resulting irregular column grid are explored in a floor plan design suggestion.&#13;
Material demand and embodied carbon are evaluated as a function of column position through finite element analysis and optimization as part of a computational model. By consulting a mass timber manufacturer’s catalogue to assign appropriate products to structural members, this approach enables material reduction in the design process rather than in the production. Bypassing slow-changing, inert fabrication procedures, this method can be realized instantaneously.&#13;
This work identifies the optimal support position to reduce bending forces in beams and slabs to be at 41% of the distance from the element’s edge to its midspan. Furthermore, this research finds that the impact of ideal column position on material efficiency depends on required minimum effective spans. While being negligible in the absence of constraints, informed column positioning can reduce timber demand by 20% and embodied carbon by 16% when subjected to a minimum effective span requirement of 6 m – a common span in timber construction – in a building of 30x30 m and five floors. Building dimensions are found to have an insignificant impact on these results.&#13;
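The 41% figure is consistent with classical beam theory; as a sanity check (a single uniformly loaded beam with symmetric overhangs, not the thesis's finite element model of full floor systems), equating the hogging moment over a support with the sagging moment at midspan gives:

```latex
% Beam of length L under uniform load w, supports at distance a from each end
\lvert M_{\mathrm{support}} \rvert = \frac{w a^{2}}{2},
\qquad
M_{\mathrm{mid}} = \frac{w L^{2}}{8} - \frac{w L a}{2}.
% Equating the two magnitudes:
\frac{w a^{2}}{2} = \frac{w L^{2}}{8} - \frac{w L a}{2}
\;\Longrightarrow\;
a^{2} + L a - \frac{L^{2}}{4} = 0
\;\Longrightarrow\;
a = \frac{\sqrt{2}-1}{2}\,L \approx 0.207\,L .
```

Expressed relative to the half-span, a/(L/2) = √2 − 1 ≈ 0.414, i.e. roughly 41% of the distance from the element's edge to its midspan, matching the optimum identified by the computational model.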
This thesis illustrates the potential for architects and engineers to enhance structural efficiency of mass timber floor systems merely by deviating from the usual, regular column grid and taking advantage of straightforward structural principles through design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162557</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for the Condition Assessment of Concrete Bridges</title>
<link>https://hdl.handle.net/1721.1/162556</link>
<description>Machine Learning for the Condition Assessment of Concrete Bridges
Fayad, Fred
The assessment of concrete bridge conditions is critical for ensuring structural integrity and public safety. Traditional inspection methods, which rely heavily on visual inspections and manual assessments, are time-consuming, subjective, and prone to human error. With the increasing number of aging bridges worldwide, there is a growing need for more efficient and accurate methods to assess bridge health. This thesis aims to explore the application of machine learning techniques for automating the bridge condition assessment process and improving the accuracy and reliability of bridge evaluations.&#13;
This study investigates the development and implementation of a model consisting of two machine learning algorithms to predict the condition of concrete bridges based on data collected from various public sources. The first algorithm appraises the structural health of a bridge based on its bridge rating, and the second assesses the condition of a bridge after a specific failure mechanism. Specifically, this work focuses on using classification algorithms such as Random Forest (RF), XGBoost, and Neural Networks (NN) within both algorithms.&#13;
The results of this study demonstrate that machine learning models can provide reasonable performance in predicting bridge conditions: the overall model achieved a testing accuracy of 79%. This research contributes to the field of civil engineering by showcasing the potential of machine learning in infrastructure management. By automating the assessment process, the proposed models can help reduce the time and cost of inspections while providing more accurate data to guide maintenance planning and bridge rehabilitation efforts. Future work will focus on further optimizing the models, incorporating additional data sources, and deploying the system for real-time bridge monitoring.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162556</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Comparison of Theoretical and Actual Coumarin Exudation Under Iron Limitation to Understand Root Exudation Mechanics</title>
<link>https://hdl.handle.net/1721.1/162555</link>
<description>A Comparison of Theoretical and Actual Coumarin Exudation Under Iron Limitation to Understand Root Exudation Mechanics
Van Note, Lana
Nutrient cycling is an important component of plants’ immune systems, largely driven by the act of exuding environmentally influential metabolites from roots. Root exudation may be driven by multiple unique mass-transport mechanisms, including active and passive transport types, though the latter is not well studied despite being labelled a significant driver of low molecular weight metabolite exudation. This research investigates the generally accepted assumption that low molecular weight metabolites, including iron-fixing coumarins (scopoletin, fraxetin, etc.), are primarily exuded passively, while high molecular weight metabolites follow an active exudation approach. Scopoletin and scopolin exudation from Arabidopsis thaliana in low-iron and replete conditions is quantified to determine whether the hypothesized passive diffusion mechanism is a significant contributor to coumarin exudation. LC-MS analysis suggests that passive diffusion of scopoletin and scopolin from roots plays a significant role in total coumarin exudation. Further research should investigate the implications of passive coumarin exudation for long-term iron storage and soil health, in addition to the relationship between coumarin production and exudation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162555</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Offshore floating solar with compressed air storage as a baseload power plant for a data center</title>
<link>https://hdl.handle.net/1721.1/162554</link>
<description>Offshore floating solar with compressed air storage as a baseload power plant for a data center
Athanasopoulos, Panagiotis Rafail
This thesis presents the conceptual design, technical modeling, and economic analysis of a novel offshore floating solar energy system integrated with Compressed Air Energy Storage (CAES) for reliable baseload power delivery to coastal data centers. The system architecture is modular, consisting of multiple “powercells,” each comprising a 5×5 photovoltaic (PV) array mounted above a matrix of submerged compressed air storage cylinders anchored below the floating platform, addressing the energy resilience and spatial constraints of coastal computing infrastructure. This scalable configuration enables distributed energy collection and localized storage, tailored to meet site-specific demands. Detailed thermodynamic modeling of both charging and discharging cycles is conducted, with analytical solutions validated against a full numerical implementation. Results show that under realistic operating assumptions, the temperature inside the storage vessels remains nearly isothermal due to the long charging duration and large heat exchange surface, enabling a simplified energy balance model.&#13;
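The near-isothermal finding simplifies the storage model considerably: for an ideal gas held at ambient temperature, the recoverable (ideal) work from a rigid vessel of volume V charged to pressure p against ambient pressure p₀ reduces to the textbook isothermal exergy expression below (a standard idealization, not necessarily the exact energy balance used in the thesis):

```latex
% Ideal-gas exergy of a rigid vessel at ambient temperature T_0
W_{\max}
  = p V \left[ \ln\frac{p}{p_0} + \frac{p_0}{p} - 1 \right]
  = p V \ln\frac{p}{p_0} - \left( p - p_0 \right) V .
```

The logarithmic term is the isothermal expansion work and the second term accounts for displacement work against the ambient; the same expression bounds the minimum charging work, so round-trip losses appear as departures from this ideal.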
&#13;
A techno-economic analysis evaluates both structural steel requirements and photovoltaic investment, benchmarked against market data from 2024. Key metrics such as structural cost per unit energy ($/kWh) and per rated power output ($/kW) are derived. The hybrid system is found to be economically competitive with lithium-ion (Li-ion) battery alternatives, offering extended lifespan (20–30 years), lower material costs, and enhanced sustainability through avoidance of critical minerals. Environmental and mooring considerations for offshore deployment are also addressed, demonstrating the feasibility of integrating energy generation, storage, and maritime infrastructure. This work advances the development of resilient, decarbonized energy systems aligned with global renewable energy targets and the rising demand for sustainable data center operations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162554</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Destructive Behaviors in Naval Shipyards: A STAMP and System Dynamics Analysis</title>
<link>https://hdl.handle.net/1721.1/162552</link>
<description>Destructive Behaviors in Naval Shipyards: A STAMP and System Dynamics Analysis
Brower, Braden C.
United States Navy Refueling and Complex Overhauls (RCOHs), and other extended maintenance availabilities, present uniquely demanding environments where Sailors face elevated risks for destructive behaviors, including suicide and substance abuse. Prolonged exposure to harsh industrial conditions, significantly degraded Quality of Service, demanding workloads, and critical manning shortfalls create cumulative stress distinct from operational duty. These destructive behaviors severely impact personnel’s well-being, erode force readiness through attrition and morale issues, and indicate systemic contributing factors as highlighted by recent investigations into carrier suicides during shipyard periods.&#13;
&#13;
This thesis utilizes Causal Analysis based on Systems Theory (CAST), grounded in systems thinking, to analyze the USS George Washington RCOH events and identify the underlying safety control structure flaws that contributed to this hazardous environment. Insights from the CAST analysis were then integrated with a qualitative System Dynamics model to better understand the feedback loops and dynamic interactions driving system behavior, particularly revealing a capability trap dynamic exacerbated by resource constraints and personnel pressures.&#13;
&#13;
The analysis identified critical, interacting systemic flaws across multiple organizational levels that contributed to the accident: (a) inadequate strategic resourcing and manning prioritization for RCOH personnel support, (b) deficient planning, risk management, and oversight processes that were ineffective at protecting Sailor well-being amidst budget and schedule pressures, (c) ineffective feedback mechanisms that prevented critical information from reaching decision-makers, and (d) reliance on flawed assumptions regarding the RCOH environment, Sailor resilience, and standard process adequacy. Based on these findings, the thesis provides actionable, systemically focused recommendations aimed at strengthening the Navy's safety control structure by improving decision makers’ mental models, enhancing feedback and oversight, enforcing well-being constraints, and fostering organizational learning. Combined, these recommendations empower leaders to proactively manage risks, reduce destructive behaviors, and ensure a safer, more resilient environment during future RCOHs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162552</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Community Risk Preparedness for Flooding Emergencies: A System Dynamics Approach for the U.S. Army Corps of Engineers</title>
<link>https://hdl.handle.net/1721.1/162551</link>
<description>Enhancing Community Risk Preparedness for Flooding Emergencies: A System Dynamics Approach for the U.S. Army Corps of Engineers
Hoyt, Thomas S.
Flooding events pose a significant and growing threat to communities in the United States, particularly as climate change alters weather patterns and sea levels continue to rise. This thesis examines how the U.S. Army Corps of Engineers (USACE) can enhance community preparedness for flood emergencies through improved risk communication strategies. Focusing on the New England District as a representative case, it integrates data from the Federal Emergency Management Agency’s (FEMA) National Household Survey and the National Flood Insurance Program (NFIP) claims archive to develop and calibrate a System Dynamics model of flood risk perception and preparedness.&#13;
The model built in this thesis incorporates key variables and captures the feedback loops that influence community preparedness over time. Scenario testing demonstrates that monthly to quarterly engagements by USACE help sustain risk awareness and reduce flood-related damage, whereas less frequent engagement demonstrates minimal improvement above the baseline. By contrast, barriers to action, such as complex procedures or limited access to information, can substantially slow the adoption of preparedness measures. High levels of trust in authorities further amplify the effectiveness of risk communication and foster community engagement.&#13;
This model quantifies the importance of frequent engagement, low barriers to action, and trust-building initiatives in reducing flood impact. Through calibration against historical claims and survey data, the model provides a robust framework that can guide USACE and partner agencies in refining their own flood risk communication strategies to bolster community resilience.
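The engagement-frequency result can be illustrated with a minimal stock-and-flow sketch; the decay rate, boost size, horizon, and initial awareness below are arbitrary placeholders, not the calibrated System Dynamics model from the thesis.

```python
# Minimal stock-and-flow sketch: risk awareness is a stock that decays over
# time and is replenished by periodic outreach engagements. Comparing monthly
# vs. annual engagement shows why frequency sustains awareness.
# All parameters are hypothetical illustrations.

def simulate(engagement_interval_months, months=60, decay=0.1, boost=0.3):
    awareness, history = 0.5, []
    for t in range(months):
        # Each engagement closes part of the gap to full awareness.
        inflow = boost * (1 - awareness) if t % engagement_interval_months == 0 else 0.0
        awareness += inflow - decay * awareness   # Euler step, dt = 1 month
        history.append(awareness)
    return sum(history) / len(history)            # mean awareness over horizon

monthly, annual = simulate(1), simulate(12)
print(round(monthly, 2), round(annual, 2))
```

Even this toy version reproduces the qualitative pattern: frequent engagement holds mean awareness well above the level reached by infrequent engagement, because the stock decays between touchpoints.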
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162551</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural engineering model of irregular and efficient concrete beams: Application to topology optimized shapes and integrated textile reinforcement</title>
<link>https://hdl.handle.net/1721.1/162550</link>
<description>Structural engineering model of irregular and efficient concrete beams: Application to topology optimized shapes and integrated textile reinforcement
Stribos, Sophia
Concrete remains one of the most widely used construction materials due to its strength, durability, and availability. However, it is responsible for a large share of global carbon emissions: within the roughly 40% of global emissions attributed to the building sector, the production of cement, a key component of concrete, alone accounts for 5-8%. As the construction industry seeks innovations towards sustainable practices, alternative beam designs that improve material efficiency and introduce nontraditional reinforcement systems are emerging as promising options. However, accurate structural models capable of predicting and validating the performance of these innovative beams are often lacking, limiting their implementation in the industry, primarily due to safety and code-compliance concerns.&#13;
This thesis bridges this gap by developing and validating a structural engineering model to predict the shear and flexural capacities and the deflection of irregular, efficiently shaped concrete beams, including those with alternative reinforcement and formwork. The model discretizes a 3D beam geometry into 2D sections to perform a geometric and structural cross-sectional analysis along the beam’s length. The structural engineering model is applied to two case studies: a topology-optimized steel-reinforced concrete beam and an integrated knit textile reinforced concrete beam, using experimentally measured material properties and beam testing data. The predicted engineering model results are compared against experimental data to validate the model’s accuracy.&#13;
While the model accurately captured the behavior of the topology-optimized steel-reinforced beam, it slightly overestimated the strength of the knit-textile reinforced beam. For the topology-optimized beam, the engineering model aligned closely in flexural capacity and gave slightly conservative estimates of shear capacity and deflection due to the nature of the design equations. However, the model showed a minor overprediction of the flexural capacity and deflection of the integrated knit textile beam. Discrepancies in this case were linked to inaccurate material properties, experimental imperfections, and prestressing effects. Additional beam analysis using this model is needed to further establish its accuracy and reliability.&#13;
This research advances structural design by offering a tool for predicting the capacity and serviceability of irregular, efficiently shaped concrete beams, including those with alternative reinforcement. This thesis enables designers to validate and optimize their innovative beam designs and support their ideas as sustainable solutions within the concrete construction industry.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162550</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessment of Decarbonization Pathways of Japan</title>
<link>https://hdl.handle.net/1721.1/162549</link>
<description>Assessment of Decarbonization Pathways of Japan
Suto, Sadami
Developing realistic pathways for decarbonization is crucial for the success of climate change mitigation actions. To evaluate Japan’s pathways toward achieving carbon neutrality, this study enhances the MIT Economic Projection and Policy Analysis (EPPA) model and analyzes a suite of policy scenarios that combine domestic mitigation measures, such as the emissions targets from Japan’s updated Nationally Determined Contribution (NDC), power mix goals, and availability of carbon capture and storage (CCS), with international emissions trading. The impacts on CO₂ emissions, GDP, consumption, carbon prices, and sectoral output in Japan between 2030 and 2050 are assessed.&#13;
&#13;
Under the baseline scenario, emissions remain flat over time at about 1,000 MtCO₂e, far exceeding the carbon neutrality goal. Even when Japan’s 2030 and 2040 NDC targets for CO₂ and the power mix are fully achieved, residual emissions of 100-200 MtCO₂e remain, which will require carbon offsets. Relying on domestic-only measures is costly for Japan: in high-ambition domestic-only scenarios without CCS, carbon prices soar to over $46,000/tCO₂ by 2050, leading to GDP losses exceeding $1.5 trillion (23% of GDP) and significant contractions in key sectors of the economy.&#13;
&#13;
In contrast, scenarios incorporating international emissions trading enable Japan to achieve comparable total emissions reductions by partially relying on imported carbon credits. This mechanism significantly lowers marginal abatement costs, allowing carbon prices to stabilize at $20-$30/tCO₂ and reducing GDP losses to about $100 billion (1.6% of GDP) by 2050.&#13;
&#13;
Scenarios that emphasize domestic reductions while flexibly using international credits emerge as manageable pathways. These scenarios achieve domestic emissions reductions of 40-60% by 2050, with carbon prices ranging from $140 to $340/tCO₂ and GDP losses contained between $150 and $290 billion (2.3-4.3% of GDP). Importantly, these scenarios incorporate the deployment of CCS, which plays a critical role in reducing marginal costs and enabling deeper abatement in hard-to-decarbonize sectors. Most industrial sectors maintain stable output, while carbon-intensive sectors undergo gradual structural transitions.&#13;
&#13;
Overall, these findings suggest that Japan can achieve carbon neutrality through an integrated strategy that combines strengthened domestic action, technological deployment, and international cooperation. This study provides a robust quantitative foundation for designing feasible, equitable, and cost-effective climate policies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162549</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact-Induced Bridge Failures: Analyzing Structural Vulnerabilities and Optimizing Pier Designs for Enhanced Resilience</title>
<link>https://hdl.handle.net/1721.1/162548</link>
<description>Impact-Induced Bridge Failures: Analyzing Structural Vulnerabilities and Optimizing Pier Designs for Enhanced Resilience
Ren, Daisy
Due to the rise in global traffic in recent years, bridge failures due to impact effects are becoming an increasing concern, especially for aging infrastructure. Following the recent collapse of the Francis Scott Key Bridge, issues regarding bridge vulnerabilities and design deficiencies arose, highlighting the need for better design codes and protection for bridge piers. This study addresses these issues by developing a comprehensive optimization framework for understanding bridges' impact-related structural failure mechanisms and enhancing the resilience of structures to dynamic impact forces, organized in three phases: (i) statistical analysis of bridge failure data from the Multidisciplinary Center for Earthquake Engineering Research (MCEER), focusing on the frequency, bridge types, and bridge material trends associated with different bridge failures across the United States; (ii) development of a compliance-based truss optimization in MATLAB, applied to 2D representations of pier structures for different truss configurations (2×3, 3×4, 3×5) under stress, load, and volume constraints to simulate large-magnitude impact conditions; and (iii) design and validation of optimization results through mathematical calculations of compliance and strain energy to ensure consistency between numerical results and structural mechanics principles. Both fail-safe and shape optimization strategies are employed and compared across all truss configurations, revealing distinct design methodologies between maximum and minimum compliance optimizations and the trade-offs between stiffness and energy dissipation. Maximum compliance optimization designs demonstrate increased redundancy and strain energy capacity, while minimum compliance optimization designs show increased efficiency but are more prone to brittle failure.
The final study utilizing volume constraints further examined material distribution under realistic impact loads and highlighted the importance of distributed load paths and deformation capacity in structural performance. This work provides a design framework for energy-absorbing pier geometries and aims to offer insight into improving current design standards for pier designs to account for extreme events and help guide retrofitting efforts that could prevent future failures.
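As a toy illustration of the compliance metric underlying such optimizations (assumed geometry and unit material properties, not the thesis's MATLAB framework), the direct stiffness method gives the compliance f·u of a two-bar pin-jointed truss under a point load; minimizing this quantity maximizes stiffness, while allowing it to grow admits more deformation and strain energy.

```python
import math

# Toy sketch: compliance f.u of a two-bar truss with one free (loaded) node,
# computed via the direct stiffness method. Geometry, EA = 1, and the unit
# load are invented for illustration.

def bar_stiffness(xa, ya, xb, yb, EA=1.0):
    """2x2 stiffness block coupling the free node's (ux, uy) dofs."""
    L = math.hypot(xb - xa, yb - ya)
    c, s = (xb - xa) / L, (yb - ya) / L
    k = EA / L
    return [[k * c * c, k * c * s], [k * c * s, k * s * s]]

def compliance(supports, free, load):
    """Assemble K at the single free node, solve K u = f, return f.u."""
    K = [[0.0, 0.0], [0.0, 0.0]]
    for (xa, ya) in supports:
        kb = bar_stiffness(xa, ya, *free)
        for i in range(2):
            for j in range(2):
                K[i][j] += kb[i][j]
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    ux = (K[1][1] * load[0] - K[0][1] * load[1]) / det   # Cramer's rule
    uy = (K[0][0] * load[1] - K[1][0] * load[0]) / det
    return load[0] * ux + load[1] * uy

# Two bars from pinned supports to a loaded apex; unit downward load.
c = compliance([(0.0, 0.0), (1.0, 0.0)], (0.5, 1.0), (0.0, -1.0))
print(round(c, 4))
```

For this symmetric configuration the horizontal displacement vanishes and the compliance reduces to the vertical deflection under the unit load; an optimizer would adjust node positions or member areas (shape or fail-safe variants) to push this value toward its chosen target.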
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162548</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computing Economic Equilibria and Their Applications to Market Games</title>
<link>https://hdl.handle.net/1721.1/162547</link>
<description>Computing Economic Equilibria and Their Applications to Market Games
Bruce, Samuel G.
The emergence of new technologies such as e-payments and tokenized assets, distributed ledgers, smart contracts and encryption have created new opportunities for improving access and equity in financial institutions. These new tools can be used to build better infrastructure and improve economic efficiency, especially in previously underdeveloped countries. The use of these tools in various applications however requires and intimate link between economics and computer science to ensure an implementation that is both computationally efficient and improves social welfare. There has been significant research in the field of computer science concerning the computation of economic equilibria, specifically Nash Equilibria and Correlated Equilibria. These algorithms, however, have not been used in many financial applications. Further, while research exists on various methods of computation for Correlated Equilibria, little exploration has been done evaluating the quality of these equilibria in terms of economic efficiency in specific mechanisms. This work provides a sweeping view of the existing literature on equilibrium computation as well as an analysis on the economic and algorithmic tradeoffs of different approaches. The discussion begins with simple 2-player, finite action games, then moves to more complex machine learning based method for equilibrium computation in difficult settings. One of these methods is then extended to a limit-order market game explicitly described by Dubey [1] and implemented, with small modifications, by SPEEDEX [2]. This limit-order game offers a continuous, vector-valued action space with complex payoff functions, causing tension with many of the equilibrium computation algorithms explored previously. This paper identifies these tensions, then offers modifications to algorithms which allow tractable, welfare improving approximate Coarse Correlated Equilibrium computation. 
Finally, there is a discussion of future work that aims to generalize the developed framework. The code corresponding to the equilibrium computation will be released publicly in this repository [3].
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162547</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing SigmaOS for Efficient Orchestration of Fault-Tolerant, Burst-Parallel Workloads</title>
<link>https://hdl.handle.net/1721.1/162546</link>
<description>Optimizing SigmaOS for Efficient Orchestration of Fault-Tolerant, Burst-Parallel Workloads
Chang, Ryan
SigmaOS is a multi-tenant cloud operating system designed for efficient orchestration of fault-tolerant, burst-parallel workloads. It provides users with isolated cloud environments called realms, where resources are accessed through a Unix-like filesystem interface, and supports applications built from procs—lightweight, rapidly spawnable programs that can be either short-lived for bursty tasks or long-running and stateful for persistent services. However, the current prototype exhibits performance bottlenecks that hinder its scalability for larger, more demanding applications. This thesis addresses these limitations by introducing two key optimizations: (1) a rearchitected watch API, enhancing its efficiency and scalability for monitoring directory changes crucial for inter-proc coordination and event notification, and (2) a new ft/task server, providing a robust and high-performance mechanism for managing fault-tolerant bags of tasks, essential for applications like MapReduce. Through these enhancements, this work demonstrates significant improvements in SigmaOS’s performance on the MapReduce benchmark, showcasing improved scaling capabilities for larger cluster deployments, larger inputs, and more granular tasks. These optimizations are crucial steps towards enabling SigmaOS to effectively realize its vision as a scalable and performant platform for complex cloud workloads.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162546</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven Modeling and Real-Time Optimal Control of Continuous Manufacturing Processes</title>
<link>https://hdl.handle.net/1721.1/162543</link>
<description>Data-Driven Modeling and Real-Time Optimal Control of Continuous Manufacturing Processes
Gomez, Samuel John
When faced with complex disturbances, continuous manufacturing processes require robust control and adaptability to maintain product quality and operational efficiency. Although advanced control strategies such as linear quadratic regulator, model predictive control, and adaptive control have demonstrated strong performance, many industrial processes still rely predominantly on classical proportional-integral-derivative (PID) controllers because of their simplicity, ease of implementation, and sufficient results.&#13;
&#13;
This thesis investigates the effectiveness of data-driven modeling techniques in capturing system dynamics more accurately than traditional physics-based models. It further examines using a high-fidelity digital twin, constructed from experimental data via linear system identification and nonlinear deep learning (NARX) approaches, to optimize PID controller parameters through simulation-based gradient descent methods.&#13;
&#13;
A comprehensive experimental platform was developed to collect synchronized sensor and video data from a roll-to-roll continuous manufacturing system, specifically targeting disturbance scenarios that cause process interruptions. The digital twin created from these data was validated against physical experiments and shown to outperform conventional physics-based models when predicting the system’s dynamic response to disturbance inputs.&#13;
&#13;
Optimal control of the system was explored by implementing a virtual PID controller that closely replicates the physical controller. Optimal gain settings were identified through simulation and applied to the physical manufacturing process. The experimental results showed a significant reduction in the mean squared error and the maximum web deviation. These results demonstrate the substantial potential of digital twin-driven, data-centric control approaches in enhancing resilience, efficiency, and adaptability in manufacturing processes. This research also lays the foundation for the future development of real-time, adaptive, and autonomous control strategies in industrial applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162543</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Power and Progress in Japan: The Past, Present, and Future of Japan as a Tech Powerhouse</title>
<link>https://hdl.handle.net/1721.1/162542</link>
<description>Power and Progress in Japan: The Past, Present, and Future of Japan as a Tech Powerhouse
Maruyama, Shun
This paper analyzes Japan’s economic and technological history since the Meiji Restoration through the framework of Power and Progress proposed by Acemoglu and Johnson (2023), focusing on the concepts of direction of technology and productivity bandwagon. A historical review reveals that technological progress and the distribution of its benefits were not determined solely by market mechanisms or technological inevitability, but were shaped by the power dynamics among governments, companies, workers, and others. Periods when workers held strong bargaining power and inclusive social institutions were in place saw the emergence of a virtuous cycle, in which the direction of technology moved toward broad-based innovation and the productivity bandwagon functioned effectively. Conversely, after the collapse of the bubble economy, a shift in the power balance in favor of companies led to a rise in short-term cost-cutting, resulting in a divergence from inclusiveness and innovation in the direction of technology, as well as a breakdown of the productivity bandwagon. This ultimately undermined Japan’s ability to leverage the strengths of its production system and led to a decline in technological capabilities. Currently, a new wave of technological innovation centered on AI is emerging. However, its impact remains heavily dependent on existing employment practices and corporate behavior models, making a short-term shift in direction unlikely. In the medium-to-long term, societal will and collective action may nonetheless create an opportunity to rebuild a virtuous cycle. This paper proposes action guidelines for companies, workers, and the government, and argues that realizing true prosperity from technological progress requires reassessing existing power structures and actively choosing new pathways as a society.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162542</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bluespec Language Server: Adapting Rust Analyzer for Bluespec SystemVerilog</title>
<link>https://hdl.handle.net/1721.1/162541</link>
<description>Bluespec Language Server: Adapting Rust Analyzer for Bluespec SystemVerilog
Chan, Martin
The Language Server Protocol (LSP) and popularity of VS Code have facilitated the current ubiquity of smart code editing features like hover or goto-definition. These features are powered by language servers, which are programs that perform compiler-like functions at keystroke latency on potentially incomplete code. Mainstream languages like Rust or Python have the large userbases to motivate the creation of bespoke language servers like Rust Analyzer or Pylance. However, smaller languages like Bluespec SystemVerilog, used in computer architecture classes at MIT, often need to make do without a language server. As students come to expect smart code editing features, they may miss the convenience when working with languages like Bluespec. In this thesis, we present a Bluespec Language Server forked from Rust Analyzer. This involved adapting the Rust Analyzer parser, HIR, and other internals to work for Bluespec SystemVerilog. The resulting artifact supports the full suite of typical smart editing features for classroom-grade Bluespec projects and continues to mostly work for industrial-grade projects. We discuss the many changes and challenges required to adapt this language server to work for a different language than it was designed for. Further, to address the current gap in the literature covering language server implementation, we include thorough discussion of the overall system architecture and several important subsystems with significant overlap with Rust Analyzer's internals. Finally, we conclude with a discussion of current limitations of our language server and directions for future work.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162541</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Microelectromechanical-Cantilever Hydrogen Sensor with Palladium-Driven Bending and Piezoresistive Readout</title>
<link>https://hdl.handle.net/1721.1/162540</link>
<description>A Microelectromechanical-Cantilever Hydrogen Sensor with Palladium-Driven Bending and Piezoresistive Readout
Andrade, Marco A.
Hydrogen gas (H₂) is considered a promising source of environmentally friendly and sustainable energy of benefit for global decarbonization. However, given the flammable and explosive nature of H₂, highly sensitive and selective detection systems with fast response are needed to enable leakage monitoring to ensure safe deployment and use. To address this need, we propose a microelectromechanical (MEMS) platform for H₂ sensing with the aim of achieving sub-1-ppm sensitivity. Our platform employs a MEMS structure that has H₂-responsive palladium (Pd) features. Once exposed to H₂, the Pd lattice expands as H₂ diffuses into it. This results in the structural deflection of a mechanically-mobile feature, in particular a cantilever. This deflection is measured using piezoresistors, which are embedded in the cantilever using a spin-on glass doping process. Piezoresistors enable rapid high-accuracy detection and quantification of H₂, as will be shown in this thesis through a combination of modeling, sensor development, sensor fabrication, and basic experimental characterization. In this thesis, we have successfully developed a fabrication plan, demonstrated the two key aspects of our fabrication, namely beam release and piezoresistor fabrication, shown beam bending driven by absorption of hydrogen by palladium, and shown that our piezoresistors respond to beam bending. Our physical results match our theoretical predictions for a beam of size 100 µm by 20 µm and a resistor with resistance 115 kΩ fabricated on SOI chips. This beam could be used to detect H₂ below 1 ppm.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162540</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emergent Dynamics in AI-Enhanced Entrepreneurship&#13;
Education: A System of Systems Study of the Orbit Tool</title>
<link>https://hdl.handle.net/1721.1/162539</link>
<description>Emergent Dynamics in AI-Enhanced Entrepreneurship&#13;
Education: A System of Systems Study of the Orbit Tool
Dale, William
The convergence of artificial intelligence and entrepreneurship education has opened a novel frontier in pedagogical innovation. The deployment of Orbit—a bespoke generative AI tool—within MIT’s 15.390 entrepreneurship course, which follows the structured Disciplined Entrepreneurship framework, is examined through a System-of-Systems perspective. This approach reveals how the tool functions not as an isolated feature but as an integrated element within a multifaceted educational ecosystem. Drawing on quantitative usage data across three consecutive academic semesters (Spring 2024-Spring 2025) complemented by course evaluation metrics, our mixed-methods approach reveals the multidimensional impact of AI-enhanced entrepreneurial education. The findings demonstrate that Orbit, particularly in its refined v2 iteration, functions as a powerful External Enabler that significantly reduces both the opacity and agency-intensity inherent in complex entrepreneurial frameworks. This enabling function manifested through measurable increases in student adoption, idea generation, and iterative engagement with critical DE steps. Beyond efficiency gains, we identify a substantive Transformation of Learning where students developed distinctly different engagement patterns—characterized by increased iteration, greater willingness to tackle complex entrepreneurial challenges, and enhanced overall course experiences. This transformation appears to deepen rather than merely accelerate learning, as evidenced by improved course evaluations alongside increased time investment in coursework. However, our analysis reveals that this transformation operates within the constraints of what we term AI’s "Jagged Frontier"—an uneven landscape of capabilities leading to differentiated impacts across DE tasks and student segments. The evolution from Orbit v1 to v2 underscores how thoughtful system design and curriculum integration critically influence the effectiveness of educational AI tools. 
This research contributes a nuanced understanding of how specialized AI tools can enhance entrepreneurship education while highlighting that their benefits depend on deliberate design choices, strategic pedagogical integration, and recognition of current technological limitations. The SoS framework proves instrumental in capturing these emergent dynamics, offering valuable insights for educational technologists, entrepreneurship educators, and institutions navigating the AI-enhanced learning landscape.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162539</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>System Design and Evaluation of Spectrum Management Architectures for Co-Primary Sharing in the 37 GHz Band</title>
<link>https://hdl.handle.net/1721.1/162538</link>
<description>System Design and Evaluation of Spectrum Management Architectures for Co-Primary Sharing in the 37 GHz Band
Alsehali, Mohammed S.
This thesis presents a system design framework for evaluating spectrum management architectures enabling co-primary access in the 37 GHz band. Motivated by increasing demand for mid-band and mmWave spectrum, and recent policy directions for federal-commercial sharing, this research investigates the trade-offs between utilization efficiency, coordination overhead, and interference performance across thousands of feasible spectrum management systems.&#13;
&#13;
Using a morphological matrix, eight key architectural decisions were defined, including coordination topology, licensing mechanism, frequency planning, sensing mode, and access priority. A parametric event-driven simulation model was developed in Python to evaluate 2,808 valid architectures under low, medium, and high spectrum demand scenarios. The performance metrics, Spectrum Utilization Efficiency (SUE), Coordination Index (Cindex), and Blocking Probability (BP), were used to generate multi-dimensional tradespaces and identify Pareto-optimal solutions.&#13;
&#13;
Results indicate that semi-dynamic spectrum management systems with decentralized or hybrid coordination topologies consistently dominate the Pareto frontier across all demand levels. Compared to fully dynamic systems, semi-dynamic designs achieve 80–90% of the utilization efficiency at less than 50% of the coordination cost.&#13;
&#13;
The results validate key hypotheses about performance trade-offs and offer actionable insights for regulators and system designers. This thesis recommends semi-dynamic, co-primary frameworks for initial 37 GHz implementation and proposes future research directions, including agent-based modeling, economic behavior integration, and accurate physics modeling.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162538</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Architecting Innovation in Healthcare: A Case Study Review in Digital Pathology</title>
<link>https://hdl.handle.net/1721.1/162537</link>
<description>Architecting Innovation in Healthcare: A Case Study Review in Digital Pathology
Jezewska, Martyna
The Mayo Clinic, a renowned non-profit organization, has long been at the forefront of healthcare innovation. This thesis explores the implementation of digital pathology within the Mayo Clinic, focusing on its potential to enhance diagnostic accuracy, increase efficiency, enable remote collaboration, and ultimately improve patient care. By leveraging the Architecting Innovative Enterprise Strategy (ARIES) framework, this research provides a comprehensive analysis of the socio-technical aspects of digital pathology implementation. The study begins with a literature review on innovation and its application in healthcare,&#13;
followed by an in-depth case study of the Mayo Clinic's journey with digital pathology. Key findings highlight the importance of organizational design, stakeholder engagement, and continuous improvement in successfully integrating digital pathology into existing healthcare systems. The research concludes with recommendations for future innovations and insights on how healthcare institutions can better prepare for and adapt to disruptive technologies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162537</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization of Renewable Energy Siting Decisions&#13;
through Vertical Axis Wind Turbine Integration</title>
<link>https://hdl.handle.net/1721.1/162536</link>
<description>Optimization of Renewable Energy Siting Decisions&#13;
through Vertical Axis Wind Turbine Integration
Suresh, Nithyaharini
The rapid increase in wind energy deployment is critical to achieving net-zero carbon emissions in the United States. However, conventional Horizontal Axis Wind Turbines (HAWTs) face deployment constraints due to their large spatial requirements, stemming from both their size and the turbine spacing needed to accommodate wake interference. Their large footprint makes them impractical to deploy in densely populated and restricted areas, such as military zones and urban regions. This setback results in the underutilization of available wind resources, limiting wind energy’s full potential. To overcome these constraints, Vertical Axis Wind Turbines (VAWTs) offer a spatially compact alternative, enabling deployment in space-constrained areas. This study investigates the feasibility of VAWTs as a complementary wind technology by integrating them into a renewable energy siting optimization framework. This framework considers HAWTs, Solar Photovoltaics (PV), battery storage, and other technologies within the New England region, assuming a 100% decarbonized power system. The model utilizes an analysis that aims to minimize total system costs to assess VAWTs under varying capital expenditures and land-use restrictions. A novel feature of this study is the use of a land availability cutoff and land restriction cases, introduced to realistically mimic the real-world land use constraints that influence wind turbine siting. The land availability cutoff defines the minimum area of land usable within a parcel for it to be considered for HAWT and Solar PV deployment, given their larger spatial footprint. Parcels below this cutoff are excluded from those technologies and considered only for VAWTs due to the lower land availability within the parcel, representing constrained regions. This methodology offers a more detailed modeling of spatial constraints for renewable energy siting and allows for a realistic assessment of VAWT feasibility. 
Results indicate that, at current commercial costs, VAWTs are less competitive with HAWTs and solar PV, primarily due to their early stage of technology development and their significantly higher CAPEX, which is approximately ten times that of HAWTs. Even under hypothetical utility-scale costs, with VAWT costs in the range of $1,300–$1,500/kW, the model still preferentially selects HAWTs due to their higher capacity factors. However, when the model considers different land use restriction cases for VAWT technology, as compared to HAWTs and Solar PVs, VAWTs become significantly more viable. VAWT placement becomes notable in these cases, increasing its share in the energy mix by 2.61% to 10.32% in favorable conditions. At high levels of land availability on a per-parcel scale, specifically, when more than 70% of the land identified as technically suitable remains available for any deployment, high-quality sites with favorable wind resources and high capacity factors continue to support HAWTs as the dominant technology given their lower Levelized Cost of Energy (LCOE). However, when the land availability cutoff increases beyond 70%, reducing siting opportunities for HAWTs and solar PV, reliance shifts towards VAWTs, amplifying the impact of their higher LCOE on overall system costs and making cost differentials between technologies more critical. These findings emphasize that while CAPEX reductions are critical in scaling VAWTs and improving their competitiveness, land-use policies and spatial constraints are primary determinants of deployment feasibility. The study highlights the need for targeted policy intervention for flexible siting policies and continued research to optimize VAWT deployment strategies, ultimately enhancing wind energy integration in land-constrained regions within New England and maximizing wind resource potential.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162536</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Targeted Codon Optimization and Translation with Deep Learning</title>
<link>https://hdl.handle.net/1721.1/162535</link>
<description>Computational Targeted Codon Optimization and Translation with Deep Learning
Chemparathy, Anugrah
Codon optimization—the task of recoding a protein’s underlying DNA sequence to maximize expression in a target organism—is a complicated biological optimization problem. Each gene brings a dynamic combination of local and long-range dependencies along with globally imposed constraints specific to the organism. While most existing tools for systematic codon optimization are restricted to optimizing under the constraint of a fixed amino acid sequence, recent architectural advancements in deep learning have made it possible to introduce partial modifications to the amino acid sequence without affecting protein function during the codon optimization process. Such approaches greatly increase the search space of feasible sequences, potentially opening up pathways to previously unconsidered DNA sequences with significantly greater expression rates. In this thesis, we seek to understand and improve the inverse-folding codon optimization model CodonMPNN, the behavior and performance of which have not yet been fully evaluated. We present a detailed empirical evaluation of CodonMPNN, characterizing its performance across reconstruction and translation tasks and demonstrating that it captures higher-order codon usage patterns. We produce evidence that CodonMPNN’s training has successfully captured nontrivial aspects of the codon distribution for 1000 unique organisms, and are able to better characterize the tasks for which CodonMPNN’s non-synonymous nature may be best suited. Then, by a combination of improved pretraining and a new inference-time evolutionary algorithm, we are able to modestly improve the base performance of CodonMPNN from its original publication. Together, these contributions yield a measurable improvement in CodonMPNN’s practical performance and provide actionable guidance for its application in constrained codon design. 
More broadly, this work highlights the importance of application-aware evaluation when deploying machine learning models in synthetic biology and motivates the design of future architectures that are better aligned with real-world usage constraints.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162535</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The First Signs of Vision</title>
<link>https://hdl.handle.net/1721.1/162534</link>
<description>The First Signs of Vision
Chang, Cathy
There has been much research on the evolution of eyes through the lens of biology; however, there has been a distinct lack of research simulating what animals saw as their eyes evolved. This project aims to create interactive simulations of the evolution of animal vision from the Cambrian Explosion to the present day through the use of extended reality (XR) environments. Our goal is to communicate and educate about the evolutionary timescale to help our audience understand 1) the history of vision and intelligence and 2) how vision came to be and why it is the way it is. In addition, we want to bridge the gap between technology and vision research to help people better understand and visualize this evolutionary process. We have also collaborated with the Museum of Science and the MIT Museum to display this work in events at their venues.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162534</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Steerability of Generative Models: Towards Bicycles&#13;
for the Mind</title>
<link>https://hdl.handle.net/1721.1/162533</link>
<description>The Steerability of Generative Models: Towards Bicycles&#13;
for the Mind
Bentley, Sarah
Generative models have rapidly advanced in their ability to produce diverse, high-quality outputs. Yet their practical utility often falls short: users frequently struggle to guide models toward desired outputs, even when the model is capable of producing those outputs. This thesis argues that unlocking the full potential of generative AI requires not only improving what models can produce (producibility), but also how effectively users can guide them toward producible outputs (steerability). In short, how can we make the entire producible sets of generative models easily accessible to humans? Our contributions are fourfold. First, we formally define steerability and introduce a framework for evaluating it independently of producibility. Second, we instantiate this framework through benchmarks on the steerability of text-to-image and language models. We find that not only is steerability poor, but steering doesn’t reliably improve with more attempts. Third, we propose a framework for designing and optimizing steering mechanisms – tools that help users articulate and achieve their goals with models – and introduce Reinforcement Learning for Human Steering (RLHS) to systematically optimize these mechanisms. Finally, we instantiate this framework in a new steering mechanism for image generation that enables users to steer via images rather than text prompts. This mechanism achieves over 2x improvement over traditional text-based prompting on our benchmark. Our mathematical framework provides a generalizable path forward for measuring and improving the steerability of generative models, while our implementations of that framework empirically demonstrate its utility and viability. Overall, we define a new axis – steerability – upon which we can vastly improve generative models not only as tools for automation, but as bicycles for the mind.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162533</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lentiviral Vector Engineering for High-Throughput Immune Profiling</title>
<link>https://hdl.handle.net/1721.1/162532</link>
<description>Lentiviral Vector Engineering for High-Throughput Immune Profiling
Dobson, Connor S.
The ability to decipher immune recognition is critical to understanding a broad range of diseases, including cancer, infection, and autoimmunity, as well as for the development of countermeasures such as vaccines and immunotherapy. Efforts to do so have been hampered by a lack of technologies that are capable of scaling to simultaneously capture the complexity of the adaptive immune repertoire and the landscape of potential antigens. Each individual’s immune repertoire consists of tens of millions of unique receptors that are responsible for surveying the trillions of possible antigens that might be encountered in one’s lifetime. As a result, there has been intense focus on the development of tools for screening large antigen sets or large collections of potential immune receptors, but most of these capture complexity on only one side of the interaction. We have therefore used synthetic virology approaches to engineer a “lentivirus surface display” platform capable of screening complex antigen mixtures against the full complexity of the adaptive immune repertoire. In Chapter 2 of this thesis, we describe our molecular engineering approaches that enabled the development of VSVGmut, an efficient and modular targeted pseudotyping strategy. In Chapter 3, we leverage VSVGmut and further advances to enable one-pot library on library antigen identification screens for T cells by displaying antigens on the surface of lentiviruses and encoding their identity in the viral genome. Antigen-specific viral infection of cells allows readout of both antigen and receptor identities via single-cell sequencing. In Chapters 4 and 5, we extend our approaches to B cells and present preliminary data for applications in both cellular and humoral profiling. Taken together, our approaches represent a new class of tools for identifying the molecular targets of the adaptive immune response at scale.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162532</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Safety Analysis and Design Improvement for Semi-Automatic Train Operation (STO) in High-Speed Rail Using STPA</title>
<link>https://hdl.handle.net/1721.1/162531</link>
<description>Safety Analysis and Design Improvement for Semi-Automatic Train Operation (STO) in High-Speed Rail Using STPA
Suzuki, Wataru
In Japan, the Tokaido Shinkansen, a major high-speed rail corridor, plans to introduce Grade of Automation 2 (GoA2) through Semi-Automatic Train Operation (STO). While partial automation promises advantages such as reduced driver’s workload and enhanced efficiency, it also creates new risks due to increasingly complex interactions among automated control systems, human operators, and physical infrastructure.&#13;
This thesis aims to systematically identify and address potential hazards arising from STO in high-speed rail. By using the Tokaido Shinkansen’s announced plan as a model case, the research seeks to uncover scenarios in which normal, non-failed system behaviors can still lead to unsafe outcomes, and to propose design solutions that mitigate those risks early in development. To achieve this, the study applies Systems-Theoretic Process Analysis (STPA). Rather than isolating hardware and function failures, STPA models the entire system as a hierarchical control structure, examining each controller’s possible unsafe actions and their feedback pathways. &#13;
The analysis reveals hazard scenarios that traditional failure-based methods might overlook. Examples include cases where a passenger is not detected between the train and platform doors at departure, or where verbal and signal instructions conflict and delay the driver’s response. These scenarios can happen even without any component failure. Drawing on these insights, the thesis recommends a variety of design improvements, such as new monitoring functions for subsystems, modifying instruction interfaces, and strengthening the software logic of automation systems.&#13;
These findings demonstrate the value of conducting a holistic safety analysis using STPA at the conceptual design stage, before late-stage changes become more expensive. Moreover, this research provides a comprehensive, system-level railway hazard analysis, and the proposed measures can be broadly applicable to high-speed rail systems with automation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162531</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biomechanical Golf Swing Analysis using Markerless Three-Dimensional Skeletal Tracking through Truncation-Robust Heatmaps</title>
<link>https://hdl.handle.net/1721.1/162530</link>
<description>Biomechanical Golf Swing Analysis using Markerless Three-Dimensional Skeletal Tracking through Truncation-Robust Heatmaps
Taylor, Benjamin F.
The efficient generation and transfer of energy in the golf swing has long been a subject of biomechanical interest, with a particular focus on the concept of the kinematic sequence, which is the coordinated segmental rotation of the pelvis, torso, arms, and club.  While previous studies have modeled aspects of this sequence using high-end laboratory setups or proprietary systems, few have provided open, quantifiable, and time-resolved measurements of angular kinematics across the full swing cycle.  This thesis seeks to address this gap by implementing a markerless temporal skeletal tracking approach built on the open-source MeTRAbs computer vision framework to model and measure joint angles and angular velocities throughout the golf swing.  Using two-dimensional video footage of right-handed golfers performing driver swings, the MeTRAbs pose estimation model and supplemental cross-frame temporal motion sequencing code were used to reconstruct three-dimensional joint trajectories and compute rotational kinematics of key body segments.&#13;
This study demonstrates the feasibility of using markerless pose estimation to extract golf swing signatures and angular velocity profiles without requiring expensive or inaccessible motion capture equipment. Preliminary analysis suggests that joint coordination patterns and temporal characteristics of body segment angular velocities may reveal quantifiable insights into the kinematic sequence, laying the groundwork for further research and instructional applications. Ultimately, this thesis contributes a replicable and cost-effective framework for analyzing golf swing biomechanics using open-source tools and computer vision.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162530</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Opportunities in Advanced Wireless Integrated Circuits</title>
<link>https://hdl.handle.net/1721.1/162529</link>
<description>Opportunities in Advanced Wireless Integrated Circuits
Fareed, Mo
The continued evolution of wireless communications, novel compact radars, and power electronics has driven demand for high-performance semiconductor materials capable of operating at higher power density, fast switching speeds, and improved efficiency. Gallium Nitride (GaN) has emerged as a leading candidate due to its superior electrical properties compared to traditional silicon (Si), silicon carbide (SiC), and gallium arsenide (GaAs). GaN’s high power density, thermal stability, and high-frequency operation make it an ideal candidate for applications in 5G/6G infrastructure, satellite communications, defense radar, electric vehicles, and power electronics. However, widespread commercial adoption of GaN faces significant barriers, including high production costs, supply chain constraints, and integration challenges within existing silicon-based fabrication processes.&#13;
&#13;
This thesis explores the opportunities and challenges associated with GaN-based integrated circuits (ICs) in the context of advanced wireless systems by utilizing Dr. Eugene Fitzgerald’s innovation framework – Technology, Markets, and Implementation (TMI). A comparative analysis of monolithic vs. board-level GaN integration is conducted. The research highlights that scaling GaN wafer production to approximately 10,000 wafers per year (200 mm wafers) is necessary to achieve cost-effective monolithic integration, yet current defense-driven demand is insufficient to drive economies of scale. Instead, commercial applications—such as telecommunications, power electronics, and consumer RF devices—are the markets best positioned to take advantage of monolithic integration in high volume. &#13;
&#13;
The findings indicate that while defense applications have led non-monolithic GaN adoption (that is, discrete GaN transistor adoption), they cannot sustain large-scale production alone due to small volume. The semiconductor industry must navigate manufacturing bottlenecks, cost reduction strategies, and foundry availability to ensure GaN’s transition from a niche, high-cost technology to a commercially viable solution. By mapping the TMI intersections and addressing economic and technical barriers, this thesis provides strategic insights into how GaN technology can achieve scalable production, unlock new market opportunities, and shape the future of advanced wireless integrated circuits.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162529</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stock-constrained design of pseudo-standard walls&#13;
from studs off-cuts</title>
<link>https://hdl.handle.net/1721.1/162528</link>
<description>Stock-constrained design of pseudo-standard walls&#13;
from studs off-cuts
Fontaine, Anouk
The AEC industry is responsible for 40% of global greenhouse gas emissions and 38% of EU waste, much of which is landfilled. This waste represents an immense portion of resources that could be used instead of new materials. Many ongoing research projects have explored ways of reusing irregular components in construction, from whole steel trusses to single elements, triangulated subparts, or even irregular wood offcuts, in order to mitigate the intensive recycling and deconstruction processes. However, the research has focused on general methodologies or one-off prototypes. This paper introduces a systematic approach to repurpose discarded steel and timber studs - components that make up as much as 10% of waste on local sites (Parigi, 2021) - into modular, steel-frame, load-bearing walls, providing a way to build new structures for the growing global demand for housing and infrastructure, while minimizing the creation of new emissions through the use of waste elements. Through a top-down and stock-constrained design approach, geometry optimization through a matching algorithm is combined with topology optimization to generate and evaluate various configurations to minimize new emissions and maximize structural efficiency. A human-scale prototype, built from the available inventory, further assesses costs, architectural and structural flexibility, construction feasibility, robotic efficiency, and embodied emissions, and provides data on the construction workflow, offering a promising pathway for sustainable construction through effective waste reuse. This approach connects the existing waste stock with the growing demand for infrastructure and minimizes embodied emissions through structurally efficient resource use, pushing forward a systematic implementation of reuse in common construction practices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162528</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ensuring Security of Supply while Decarbonizing Islanded Heavy Industrial Electricity Systems</title>
<link>https://hdl.handle.net/1721.1/162527</link>
<description>Ensuring Security of Supply while Decarbonizing Islanded Heavy Industrial Electricity Systems
Kumar, Prashant
Electricity is set to become the central pillar of both energy production and consumption in the global effort to achieve net-zero emissions. As key sectors—transportation, chemicals, and heavy industry—seek to decarbonize by electrifying their operations, industrialized nations face mounting strain on their electricity systems. This strain is further compounded by the rising demand for electricity driven by data centers and artificial intelligence applications, heralding a future of potentially unrelenting load growth.&#13;
In such a context, it becomes not merely prudent but essential to approach decisions regarding investment and operation in the electricity sector with analytical rigor. Advanced capacity expansion models provide the tools for this task. In this thesis, the GenX model is employed to study Taiwan’s electricity system—an islanded, industrially intensive grid—evaluating the evolution of its capacity mix, generation profile, prices, emissions, and overall costs.&#13;
Our findings suggest that a reliable path to decarbonization lies in a considered combination of natural gas-fired generation with carbon capture, utilization, and storage (CCUS), renewable sources such as solar and wind, and energy storage systems. Furthermore, this study finds that integration of nuclear and geothermal technologies significantly improves the cost-effectiveness of achieving decarbonization targets.&#13;
This thesis also addresses the “missing money” problem endemic to energy-only electricity markets, examining how the introduction of a capacity market influences both investment and operational outcomes. We find that the efficacy of capacity markets is highly sensitive to the design parameters of the demand curve and the capacity credit values of the resources. For islanded systems such as Taiwan’s, a pragmatic approach to ensuring security of supply may involve retaining existing natural gas infrastructure as a strategic reserve, paired with a capacity market design that avoids excessive conservatism, leveraging the absence of policy interactions and competition with neighboring electricity markets, as observed in Europe.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162527</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Grid Enhancing Technologies: Optimization and Benefit for Distribution Grids</title>
<link>https://hdl.handle.net/1721.1/162526</link>
<description>Grid Enhancing Technologies: Optimization and Benefit for Distribution Grids
Anastos, Daniel
One of the largest existential challenges the US and other countries face is climate change, and perhaps no other system is more crucial to combating climate change than the grid. Transmission and distribution grids are increasingly being asked to play an even larger role than they have in the past; consider AI, EVs, residential solar, electrification of heat, decarbonization of buildings, rising energy rates, and aging infrastructure. Improving the grid is a necessity to decarbonize and innovate. However, utilities, backed by state regulation, usually, but not always, use traditional techniques to expand grid capacity and increase resiliency, as opposed to investing in modern grid technology that would more quickly allow for future innovations and decarbonization. These technologies, or techniques, are broadly called grid enhancing technologies, or GETs. There are rational reasons why GETs are not used more often. Utilities are, correctly, highly risk-averse because they must safely and adequately supply power directly to people. Utilizing new technologies, even if proven, can be a risk that utilities are unwilling, or not allowed, to take given their role and responsibility. But these risks are largely avoided with the technologies discussed in this paper, and one could argue these technologies could not only make the grid cheaper to expand but also give the grid more resilience. This paper explores how a particular grid section can increase its solar penetration by avoiding traditional hosting capacity limitations, using GETs that are not even novel but largely tested and proven. Traditionally, at some limit, the utility will stop allowing solar in an area due to various grid constraints. This paper explores how a utility may solve these constraints using new methods to avoid large grid expansion CAPEX costs and utilize new technologies or techniques. 
Some of the techniques explored here are commercial scale energy storage support at substations, PV curtailment, and volt-var optimization control.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162526</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geothermal Energy Planning Considerations for Military Operational Energy Demands</title>
<link>https://hdl.handle.net/1721.1/162525</link>
<description>Geothermal Energy Planning Considerations for Military Operational Energy Demands
Seckfort, Cody L.
Contingency locations are temporary military bases that are often established in austere or contested environments. These locations rely heavily on diesel fuel for electrical power, which creates logistical vulnerabilities and increases the risk to personnel conducting fuel resupply missions. While the Department of Defense has made progress in adopting renewable energy technologies, many of these systems remain too large, inefficient, or underdeveloped for widespread use in operational environments. Geothermal energy presents a promising but underexplored alternative for generating reliable, on-site electrical power without the need for continuous fuel resupply.&#13;
This thesis evaluates the feasibility of geothermal energy systems for military operational energy demands and introduces a modified power planning process that incorporates geothermal considerations. The research focuses on closed-loop geothermal systems, utilizing an example system called the “Mil-Loop”, which is designed to minimize the system surface footprint and support remote installations. The planning process integrates existing geothermal tools, including GEOMAP/TEST for resource estimation and GEOPHIRES for system modeling and performance analysis. The Mil-Loop System Model incorporates each step of the planning process to produce a site-specific power system profile. &#13;
A case study using site-specific data from Bagram Airfield was used to assess the performance of a hybrid geothermal-diesel power system. The results suggest that geothermal system integration could reduce diesel fuel consumption by up to 42.9 percent over a 40-year site lifecycle. A sensitivity analysis indicates that geothermal system power output, drilling time, and installation costs are the most critical parameters affecting system viability. Advances in drilling technology and heat extraction have the potential to reduce installation costs and timelines, making geothermal more competitive with diesel generation. This thesis also identifies a gap in military energy planning resources, specifically the lack of frameworks that include geothermal options for operational environments. It recommends that the DoD begin integrating geothermal technologies into its energy planning strategies and develop modular systems that can be deployed in contested or resource-constrained areas. &#13;
While this research is limited by simplified power demand modeling and generalized tool assumptions, it offers a practical framework for evaluating geothermal viability in future defense applications. This study demonstrates that geothermal energy systems, particularly closed-loop configurations, can serve as a viable and strategically beneficial power source for military operations. When paired with targeted technology development and thoughtful integration into planning processes, geothermal systems can reduce logistical burdens, improve energy resilience, and enhance mission success in operational environments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162525</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Levelized Cost of Fuel (LCOF) studies for microreactors using TRISO&#13;
fuel in hydride and beryllium-based composite moderators in open&#13;
and closed fuel cycles</title>
<link>https://hdl.handle.net/1721.1/162524</link>
<description>Levelized Cost of Fuel (LCOF) studies for microreactors using TRISO&#13;
fuel in hydride and beryllium-based composite moderators in open&#13;
and closed fuel cycles
Balla, Sai Prasad
This study provides a comprehensive techno-economic evaluation of a specific class of nuclear batteries—high-temperature gas-cooled 10 MW_th microreactors (HTGRs) with TRISO fuel in prismatic- and pebble-bed cores—using four composite moderator concepts (MgO–Be, MgO–BeO, MgO–YH, MgO–ZrH). These options are compared against a prismatic graphite benchmark, under both once-through and continuous-recycle fuel cycles.&#13;
&#13;
In once-through prismatic systems, hydride-based moderators can reduce overall fuel-cycle costs by up to about 20% relative to graphite, whereas beryllium-based moderators may remain 40–50% costlier due to higher raw material expenses. Shifting from prismatic blocks to pebble beds decreases moderator usage and increases burnup, thus making advanced moderator options more competitive. &#13;
&#13;
Adopting a continuous-recycle strategy replaces enrichment with reprocessing and can further lower fuel-cycle costs by roughly 30%. Coupling a sodium-cooled fast reactor (SFR) to supply transuranics further reduces the cost: SFR driver fabrication and reprocessing can account for the bulk of total costs, rendering microreactor-level variations comparatively minor. Meanwhile, pebble-bed designs promise ultra-high burnups and extended residence times, which could yield significant economic gains, contingent on demonstrated long-term TRISO fuel integrity.&#13;
&#13;
Waste handling also factors into the analysis. Deconsolidation—removing the inert moderator before disposal—can shrink spent-fuel volumes by more than 90%, easing repository demands. Continued R&amp;D into advanced additive manufacturing, high-burnup TRISO performance, and streamlined waste management will be crucial for capitalizing on these potential cost advantages.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162524</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Dexterous Manipulation Enabled by Learning at Scale in Simulation</title>
<link>https://hdl.handle.net/1721.1/162523</link>
<description>Robust Dexterous Manipulation Enabled by Learning at Scale in Simulation
Bhatia, Jagdeep Singh
Robots with robust bimanual dexterity have the potential to transform industries such as manufacturing and healthcare by performing complex tasks at human-level proficiency. While end-to-end learning methods have shown promise in achieving this goal, scaling these approaches remains challenging. Existing paradigms suffer from the high costs associated with collecting large-scale, high-quality demonstrations on physical systems and face performance saturation due to reliance on offline data. We propose a task-agnostic pipeline that leverages robotics simulation to overcome these limitations. In particular, we introduce DART, a cost-effective, augmented-reality robot teleoperation platform for scalable data collection. We demonstrate through a user study that it enables twice the throughput of existing systems. We also present a learning algorithm that integrates real-world demonstrations with reinforcement learning to surpass performance plateaus. Finally, we design a method that transfers policies trained in simulation to real robots zero-shot using only RGB input. Together, these contributions provide a practical and scalable path toward achieving general-purpose dexterous robot manipulation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162523</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Image Registration and Gantry Tracking System of Clytia hemisphaerica</title>
<link>https://hdl.handle.net/1721.1/162522</link>
<description>Image Registration and Gantry Tracking System of Clytia hemisphaerica
Bunch, Bradley
Understanding nervous system function and evolution requires detailed behavioral analysis of model organisms such as the jellyfish Clytia hemisphaerica. However, its size and rapid, free-swimming nature pose significant tracking challenges. This work presents a platform for the XY gantry system developed to overcome these hurdles for high-resolution behavioral monitoring. Separately, to prepare for downstream neural analysis, we developed an automated neuron segmentation pipeline - tailored for image registration purposes. Together, the tracking system and the analysis preparation pipeline provide powerful, distinct tools for high-throughput behavioral quantification and facilitate future studies linking behavior to underlying neural dynamics in Clytia hemisphaerica.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162522</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Digital Twin Technology Applied to Automotive Diagnostics</title>
<link>https://hdl.handle.net/1721.1/162521</link>
<description>Digital Twin Technology Applied to Automotive Diagnostics
Mwarage, Jessy Mbagara
Digital Twin (DT) technology is currently attracting considerable interest. Organizations oriented around physical products are increasingly looking for ways to stay ahead of the technological innovation curve so as not to be disrupted by more agile entrants, and the promise of a technology like DT is alluring as a means of maintaining a competitive edge. This thesis explores the potential benefits of DT technology and the challenges that might be faced in implementing one. To this end, a problem statement is formulated in the field of automotive diagnostics, a key value-adding field for automotive companies seeking to better manage the diagnosis and repair of their automobiles in the field or the manufacturing environment. The problem is further concretized with a study of user-driven use cases and needs in a real automotive company. From these needs, a set of requirements is formulated to guide the architecture and design of a DT demonstration. The process of architecting and designing the DT is documented, including a deep dive into the modeling approaches considered, the solution space for the architecture, and the detailed design and implementation of a DT demonstration from a selected architectural concept. The DT demonstration is then operated under controlled conditions to showcase some of its capabilities. Finally, the effectiveness of the demonstration and the lessons learned about the implementation process are discussed. The results of the study and demonstration show promise for organizations seeking to adopt DT technology, in this particular case for automotive diagnostics. The benefits lie mainly in better system architecture planning and an increased potential for incorporating lessons learned from products operating in the field back into the design process. 
These benefits are weighed against the socio-technical challenges of implementing DTs from the outset of a system design exercise.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162521</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Acquisition for Enhancing Human-Informed Topology&#13;
Optimization</title>
<link>https://hdl.handle.net/1721.1/162520</link>
<description>Data Acquisition for Enhancing Human-Informed Topology&#13;
Optimization
Wang, Zach
This thesis presents a survey application designed for the future development of Human-Informed Topology Optimization (HiTop) towards the deeper integration of optimization and real-world feasibility. Topology optimization produces high-performance designs by optimally distributing material, but its application in professional environments remains limited due to fabrication constraints and inflexible design workflows. To address this, the Carstensen Group developed HiTop, which integrates optimization algorithms with human experience, allowing engineers to modify the computer-generated design based on their professional judgment. The future development of HiTop therefore requires real-world data on human preferences. This project introduces a web-based survey app integrated with Qualtrics. It presents users with various design scenarios and computer-optimized designs, and records their modifications and reasoning. A preliminary survey collected responses from 13 professionals and engineering students. Preliminary findings suggest that engineers consistently focus on similar regions of interest, even when motivated by different reasons. However, the sample size is too small to draw statistically significant conclusions. While the platform mostly performed as intended, a bug related to data storage was discovered during analysis. The issue has since been resolved, and the tool is now fully functional and ready for broader deployment.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162520</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating Motivational Drivers of Participation in W3C’s Web Standards Development Process</title>
<link>https://hdl.handle.net/1721.1/162519</link>
<description>Investigating Motivational Drivers of Participation in W3C’s Web Standards Development Process
Lauber, Emily
This research investigates the motivational drivers for companies and individuals to participate in the World Wide Web Consortium’s Web standards development process. Motivational drivers are identified through a literature review, primary sources, and interviews. Thirteen semi-structured interviews were conducted with questions related to participants’ experience with the World Wide Web broadly, Web standards in general, the organization of W3C, and game modeling of the process. W3C was selected as the case study of Web-related standards bodies because of its unique model of paid membership yet open standards available royalty-free. The W3C standards process requires consensus-building, horizontal review, and proof of implementation before the organization officially recommends the specification. Existing research documents the history and value of standardization across industries, the modeling of various Standards Development Organizations (SDOs) in information industries, and the negotiation of international Internet governance. This thesis does not attempt to prove a societal benefit of Web standards but instead focuses on an individual’s belief in societal benefit and how that belief drives their engagement with W3C.&#13;
&#13;
Initial findings point to members seeking economic, philosophical, and moral value through participation in Web standards development. A game theory framework evaluates the economic value of different players within the ecosystem and identifies that Web browser vendors and long-time consortium members have greater power to achieve their preferred specification outcomes than Web developers or newcomers. Despite changes in the Web ecosystem in the past 30 years, W3C members continue to be drawn to the Web for the same philosophical intents for which Sir Tim Berners-Lee designed it. There are shared concerns, though, that the economic power players identified in the game modeling have damaged or will threaten the philosophy of an open, safe, accessible Web. Interviewees shared personal beliefs that there is a moral responsibility to engage in Web standards development and enable W3C’s mission of “empowering humanity”. Further research is required to catalogue more motivational drivers, evaluate drivers across other Web-related Standards Development Organizations, and rank the priority of motivations when the different drivers are in tension.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162519</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Objective Generation of Pareto-Optimal Perception Architectures for Autonomous Robotic Systems</title>
<link>https://hdl.handle.net/1721.1/162518</link>
<description>Multi-Objective Generation of Pareto-Optimal Perception Architectures for Autonomous Robotic Systems
Putnam, Rachael M.
Designing perception systems for autonomous robots and vehicles requires balancing sensor performance against cost, complexity, and integration constraints. This thesis introduces GO4R (Generation and Optimization of Perception System Architectures for Robotics), a multi-objective framework that jointly optimizes sensor selection and placement against a volumetric, entropy-based utility metric H and a monetary cost M ($). Perception Entropy H is formalized as a volumetric measure of uncertainty across a voxelized region of interest (ROI), which naturally rewards the coverage, overlap, and redundancy required for robust sensor fusion and calibration.&#13;
&#13;
NSGA-II is implemented with custom mixed-variable operators to handle both the continuous (e.g., sensor poses) and discrete (e.g., sensor type/count) decision variables found in this problem. Two case studies, long-range outdoor navigation on a Clearpath Jackal and short-range indoor navigation on ANYmal-C, demonstrate the framework’s ability to generate Pareto-optimal sensor architectures under vastly different ROI definitions and operating conditions. In the Jackal study, GO4R converges to a population of 11 novel Pareto-optimal designs, revealing sensitivity to voxel size and importance weighting. In the ANYmal-C study, the compact, uniformly weighted ROI yields a flatter Pareto front with 25 Pareto-optimal designs, underscoring how intrinsic sensor parameters (e.g., angular resolution and field of view) dominate design trade-offs when baseline coverage is already high.&#13;
&#13;
Key architectural decisions are analyzed, quantified by their impact on Pareto front shape, and ordered according to the GO4R method to successively reduce uncertainty. The resulting guidelines provide practitioners with a rigorous, reusable process for tailoring perception systems to task-specific requirements. Finally, GO4R provides a publicly available NVIDIA Isaac Sim extension to aid practitioners in following the GO4R method, regardless of their autonomy application. Future work will extend GO4R to dynamic environments, improve the fidelity of generated designs, and incorporate additional cost metrics such as computational load and maintainability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162518</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Agricultural Waste Utilization: Life Cycle Assessment for Selecting Carbon-Management Best Practices on a Global Scale</title>
<link>https://hdl.handle.net/1721.1/162517</link>
<description>Agricultural Waste Utilization: Life Cycle Assessment for Selecting Carbon-Management Best Practices on a Global Scale
Shao, Yu-Tong
Crop residues are a widely available form of agricultural waste with several possible reuse applications, including use as biofertilizers, animal feed, biofuels, and for carbon sequestration. However, in many parts of the world, large quantities of these residues are still burned in the field, releasing significant amounts of greenhouse gases (GHGs) and air pollutants to the atmosphere. This study aims to evaluate alternative and carbon-efficient strategies for reusing crop residues – especially rice straw and wheat straw – by conducting life cycle assessments (LCA) of multiple utilization pathways. Several alternative scenarios for utilizing crop residues are assessed: incorporation of residues into the field, use as animal feed, pyrolysis for electricity generation, pyrolysis for carbon sequestration, and electricity generation through residue combustion. Specifically, for the pyrolysis and residue-combustion electricity scenarios, the maximum feasible transport distances of crop residues from agricultural fields to processing facilities are modeled for different logistics methods, informing the siting of centralized facilities while maintaining the GHG benefits of these scenarios. The results of this study highlight that electricity generation using crop residues, either through pyrolysis or direct residue combustion, offers the greatest climate benefits among all evaluated options. Carbon sequestration through pyrolysis also yields substantial GHG reductions, although slightly lower than the benefits from electricity generation. While crop residue-based electricity emits 4.35 to 31.25 times more GHGs per unit of electricity generated than renewable sources and 50.00 to 67.57 times more than nuclear sources, it still performs better than fossil fuels and provides added value in terms of agricultural waste management, resulting in 30.56 to 66.67% lower GHG emissions.
Moreover, transportation emissions account for only a small share of the total life cycle global warming potential (GWP) in the electricity generation scenarios, ranging from 0.66% (via ships) to 16.40% (via trucks) for every 1000 km traveled. This makes long-distance residue transport viable. The findings of this study underscore the potential for crop residues to play a meaningful role in climate mitigation and sustainable agricultural waste management.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162517</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic Roadmapping and Technology Portfolio Selection for Heating Decarbonization in Canada</title>
<link>https://hdl.handle.net/1721.1/162516</link>
<description>Strategic Roadmapping and Technology Portfolio Selection for Heating Decarbonization in Canada
Shalash, Karim
Heating systems contribute significantly to Canada’s greenhouse gas emissions, accounting for approximately 117 megatons of CO₂ equivalent, demanding urgent decarbonization to meet national climate targets. This thesis employs the Advanced Technology Roadmap Architecture framework, integrating strategic roadmapping and technology portfolio selection methodologies to evaluate pathways for transitioning Canada’s heating sector to net-zero emissions by 2050. By analyzing historical emissions, forecasting adoption trends for key technologies like heat pumps, and conducting stakeholder-driven scenario analysis, this research identifies critical barriers to scaling low-carbon solutions, including high upfront costs, infrastructural limitations, and regional climatic constraints. &#13;
Seven representative heating architectures—air-source heat pumps, ground-source heat pumps, district heating, hydrogen-based systems, electric resistive heating, and conventional gas-fired furnaces—are evaluated comprehensively. Among these, district heating is particularly emphasized due to its potential for significant emissions reductions and minimal consumer-bearing initial cost of ownership, especially when strategically integrated with waste heat recovery from data centers. This integration utilizes otherwise wasted thermal energy, creating a robust symbiotic opportunity for urban and industrial decarbonization. &#13;
To support the practical deployment of these architectures, the thesis establishes a targeted technology portfolio comprising essential enabling and supporting technologies. Enabling technologies include centralized supervisory control systems, urban-scale district heating networks, inverter-driven compressors, advanced refrigerants, ground heat exchangers, and circulation pumps with variable frequency drives. Critical supporting technologies identified encompass building information modeling integration kits, cybersecurity modules, digital permitting platforms, smart thermostats, and thermal energy storage systems, among others. &#13;
This thesis further explores technology trade-offs, focusing on structural complexity, technology readiness, and associated risks of deployment. Through detailed modeling and stakeholder-informed scenario analysis, the thesis concludes that effective decarbonization of heating in Canada necessitates substantial policy interventions, robust financial incentives, targeted infrastructure investments, and region-specific strategies. The analysis indicates that a carefully allocated $8 billion catalyst investment could close approximately 60% of Canada’s heating emissions gap by 2050. Ultimately, district heating coupled with waste heat recovery emerges as a particularly promising strategic option, underscoring its transformative potential within a diversified approach to achieving Canada’s sustainable heating future.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162516</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proximity and Prenatal Care: Geographic Accessibility to Healthcare Facilities in N’Djamena, Chad</title>
<link>https://hdl.handle.net/1721.1/162515</link>
<description>Proximity and Prenatal Care: Geographic Accessibility to Healthcare Facilities in N’Djamena, Chad
Alkhalil, Kabbod
Access to prenatal care is critical for reducing maternal and neonatal mortality rates. Yet, accessibility to healthcare facilities remains an understudied challenge in many sub-Saharan African countries. This study examines the spatial accessibility to healthcare facilities in N’Djamena, Chad, across various transportation modes, as well as the relationship between travel time and adherence to WHO-recommended prenatal care visits.&#13;
This analysis utilized a mixed-methods approach. A geospatial analysis was conducted to estimate travel times and distances to the nearest healthcare facility across the city of N’Djamena using various transportation modes to uncover areas of low accessibility. This analysis was supplemented with survey data collected from interviews with 67 pregnant women across three different hospitals.&#13;
Findings show that 72% of the surveyed population use motorcycles or cars and benefit from high accessibility; 95% of these patients have travel times under 26 and 30 minutes, respectively. In contrast, pedestrians have poor accessibility, especially when patients only attend district or national hospitals. This behavior is common: 81% of the surveyed population reported bypassing closer facilities, citing familiarity and quality of care as the main reasons. Among pedestrians, 20% of the population have travel times greater than one hour on foot. &#13;
While adherence to WHO guidelines was high in early pregnancy (below 20 weeks), it declined in later stages. The study found no statistically significant correlation between travel time and adherence.&#13;
Improving accessibility for pedestrians will require a combination of health system improvements, better facility distribution, and transport subsidies. The Ministry of Public Health and urban planners can employ similar data-driven approaches to plan the placement of new healthcare facilities and develop outreach strategies to ensure equitable access in a growing urban context.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162515</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proof-of-Work Mitigation Strategy for DNS-Based&#13;
Amplification Attacks</title>
<link>https://hdl.handle.net/1721.1/162514</link>
<description>Proof-of-Work Mitigation Strategy for DNS-Based&#13;
Amplification Attacks
Bansal, Umang
Distributed Denial of Service attacks, and particularly DNS Amplification attacks, have seen a steady rise in deployment over the past few decades. DNS Amplification attacks, in particular, are challenging to identify and mitigate because of their apparent similarity to legitimate DNS traffic. This thesis proposes a new Proof-of-Work mitigation strategy that provides a defense against DNS Amplification attacks and shifts the burden of mitigation to the attackers. Through our experiments, we show that our Proof-of-Work strategy is effective in reversing the impact of DNS Amplification attacks on the victim’s ability to service legitimate clients. We also provide an evaluation framework to evaluate the mitigation strategy’s impact on the victim’s quality of service.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162514</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Human-Informed Variables in Medical Data</title>
<link>https://hdl.handle.net/1721.1/162513</link>
<description>Modeling Human-Informed Variables in Medical Data
Abu Daoud, George
In the Age of Information and Artificial Intelligence, data plays a major role in analyzing and understanding underlying trends and patterns, as well as informing processes and operations. Medical data often captures information beyond mere patient conditions and state, including human behavioral aspects of the medical process that affect both the data itself and the decisions informed by it. Modeling these variables could help us understand how they influence decisions in the field and potentially augment our models for better and more nuanced predictions. In the first study, we look into how external non-medical factors might affect decision-making by investigating the effect of 30-day mortality metrics on discharge rates following surgeries in Cardio-Vascular Intensive Care Units (CVICU) using data from the MIMIC-IV dataset. In the second study, we examine data extraction from human-written notes to enhance organ procurement decision processes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162513</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Soil Moisture Dynamics and Thresholds for Surface Energy Balance Regime Transitions: An Observational Analysis at a U.S. Grassland Site</title>
<link>https://hdl.handle.net/1721.1/162512</link>
<description>Soil Moisture Dynamics and Thresholds for Surface Energy Balance Regime Transitions: An Observational Analysis at a U.S. Grassland Site
Verensia, Ria
Understanding how soil moisture declines following rainfall—when the soil progressively dries due to evaporation and plant uptake—is critical for assessing plant water stress, surface energy partitioning, and land–atmosphere interactions. These periods of moisture loss, commonly referred to as soil moisture drydowns, provide a valuable window into the transition from wet to dry surface conditions. This study focuses on the critical soil moisture threshold (θ*), which marks the transition from energy-limited to water-limited surface evaporation regimes. This transition reflects a key shift in surface energy balance and controls the extent to which evaporation is constrained by moisture availability. While previous research has typically treated θ* as a static value based on soil texture, emerging evidence suggests that it may vary depending on environmental conditions, particularly seasonal climate. This study investigates whether θ* is a fixed property or a dynamic threshold influenced by seasonal variation and available energy. Using in situ data from the Soil Temperature and Moisture Profile (STAMP) system and Infrared Thermometer (IRT) measurements at a semi-arid grassland site in Oklahoma, USA, I identify and analyze soil moisture drydown events. I estimate θ* by applying piecewise linear regression to the relationship between soil moisture and diurnal surface temperature range, isolating the breakpoint that indicates the transition from energy-limited to water-limited evaporation. Results reveal that θ* exhibits systematic temporal variations, particularly across seasons and temperature regimes, suggesting that surface temperature dynamics during drydowns are most likely a response to changes in soil moisture content. These findings challenge the assumption that θ* is solely texture-dependent and highlight the need to account for dynamic environmental controls in modeling surface energy exchange. 
This research provides new insights into soil moisture-temperature coupling and offers implications for land surface model development, drought forecasting, and vegetation response assessments under a changing climate.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162512</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combustion Physics and Inverse Modeling of Energetic Materials</title>
<link>https://hdl.handle.net/1721.1/162511</link>
<description>Combustion Physics and Inverse Modeling of Energetic Materials
Kim, Suyong
Energetic material combustion involves intricate multi-scale and multi-phase dynamics, where the interplay of chemical reaction and transport processes results in complex wave patterns across a wide range of length scales from nano to millimeter. Our limited fundamental understanding of these combustion processes poses a challenge to designing and optimizing combustion properties, leading to significant reliance on empirical knowledge. Deeper comprehension can be achieved by linking fundamental aspects of reaction and transport to combustion dynamics. However, there are very limited diagnostic tools available to quantify material properties and chemical kinetics for heterogeneous materials under combustion, which hinders the quantitative analysis of combustion waves. Furthermore, combustion wave dynamics and flame structures in modern nanocomposite energetic materials have not been fully resolved. This lack of breadth in modeling techniques and experimental characterization has prevented quantitative analysis of combustion wave dynamics for energetic materials.&#13;
This thesis aims to establish theories for combustion waves in energetic materials by correlating their intrinsic chemical reaction and transport properties with wave dynamics. To achieve this goal, two major steps are involved. First, we propose a novel inverse modeling approach to infer material properties and chemical kinetics using PDE-constrained optimization, which allows for deciphering the reaction-transport coupling from observable dynamics in currently available combustion diagnostic tools. We further discuss training challenges of neural differential equations with data subject to scale separation and propose mitigation strategies that enable learning stiff dynamical systems. Second, we investigate flame structures and dynamics in nanocomposite energetic materials at length scales ranging from micron to sub-millimeter using high-speed microscopic imaging techniques. Two distinct combustion wave patterns are characterized by flame dynamics and stability. Based on inverse modeling and microscopic observation, we finally construct two theories of combustion wave propagation and wave stability by performing scaling analysis on wave dynamics in terms of mass and thermal transport and chemical reaction. A systematic view of energetic material combustion allows for deeper comprehension of how multi-scale dynamics of reaction and transport evolve into macro-scale combustion waves, potentially leading to the development of predictive models for the intricate heterogeneous combustion dynamics of energetic materials.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162511</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational methods for small-molecule transparent semiconductors</title>
<link>https://hdl.handle.net/1721.1/162510</link>
<description>Computational methods for small-molecule transparent semiconductors
Carter, Ki-Jana
Solar energy has enormous potential to meet global energy demand in a renewable and environmentally sustainable manner. Although silicon-based photovoltaic (PV) devices have become significantly more affordable and accessible in recent decades, there is a need to develop alternative PV technologies which can be deployed more widely and cheaply. Visibly transparent PV devices based on organic semiconductors are well-suited to this role due to their ability to be installed on windows and building facades, their mechanical flexibility, and their high degree of tunability. However, in order for transparent PV to become commercially viable, further research is needed to shed light on the systematic tuning of the optical properties of molecular materials with visible transparency. This work applies computational tools — namely density functional theory (DFT) and graph neural networks — to gain a deeper understanding of how molecular structure impacts macroscopic optical properties and suggest directions for future study.&#13;
&#13;
In this work we employ linear-response time-dependent DFT with optimally tuned and screened range-separated hybrid functionals in order to compute accurate photoabsorption spectra with relatively low computational cost. Additionally, we utilize molecular graph neural networks as a means to leverage quantum mechanical datasets to accelerate the materials discovery process. These methods are combined to make progress on the optical design of organic semiconductors. &#13;
&#13;
This thesis document is organized as follows. Chapter 1 introduces transparent photovoltaics and the associated materials design considerations. Chapter 2 summarizes the computational methods employed in this work. Chapter 3 describes the first-principles modeling of small-molecule transparent absorbers using perylene diimide derivatives as a case study. Chapter 4 studies principles underlying the design of molecular graph neural networks. Chapter 5 applies these modeling techniques to construct a spectral dataset and train a scalable spectral model; screen a large dataset of organic molecules; identify physical trends and structure-property relationships; and suggest promising candidate materials for transparent photovoltaic applications.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162510</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>MOBLLM: Model Building LLMs via Symbolic Regression and Experimental Design</title>
<link>https://hdl.handle.net/1721.1/162509</link>
<description>MOBLLM: Model Building LLMs via Symbolic Regression and Experimental Design
Binbas, Berkin
Large language models (LLMs) have recently entered daily use and are already extensively employed for a wide variety of tasks. They have been shown to carry out increasingly complex tasks, including those that require a high level of formal and mathematical reasoning at human or superhuman levels. In particular, their in-context learning capabilities, the domain-specific knowledge acquired through their vast pretraining corpora, and their fine-tunability for specific tasks have drawn considerable attention and research in the field. However, the application of LLMs to the frontiers of scientific research remains an underexplored direction. In this work, we investigate how one can leverage LLMs to aid with building compact mathematical models and experimental design. Specifically, we propose a framework for using LLMs as a guide to concurrently handle the experimental design and symbolic regression tasks for data obtained from 1) a black-box 1D function and 2) a black-box physical system. We propose further modifications to our base framework and perform experiments to analyze how it performs under different experiment variants, across different LLM tiers. Our experiments reveal that while larger models (of around 70B parameters) do not always achieve better downstream performance than smaller models (of around 8B parameters), they are able to utilize the given information and/or physical context when designing experiments and proposing symbolic expressions, and they perform better than random-design baselines. We also observe that natural language constraints do not consistently improve symbolic regression accuracy. These results underscore both the challenges and the potential of integrating LLM agents into the scientific discovery process, particularly as proposers of experiments and symbolic expressions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162509</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>I-Con: A Unifying Framework for Representation Learning</title>
<link>https://hdl.handle.net/1721.1/162508</link>
<description>I-Con: A Unifying Framework for Representation Learning
Alshammari, Shaden
As the field of representation learning grows, there has been a proliferation of different loss functions to solve different classes of problems. We introduce a single information-theoretic equation that generalizes a large collection of modern loss functions in machine learning. In particular, we introduce a framework that shows that several broad classes of machine learning methods are precisely minimizing an integrated KL divergence between two conditional distributions: the supervisory and learned representations. This viewpoint exposes a hidden information geometry underlying clustering, spectral methods, dimensionality reduction, contrastive learning, and supervised learning. This framework enables the development of new loss functions by combining successful techniques from across the literature. We not only present a wide array of proofs, connecting over 23 different approaches, but we also leverage these theoretical results to create state-of-the-art unsupervised image classifiers that achieve a +8% improvement over the prior state-of-the-art on unsupervised classification on ImageNet-1K. We also demonstrate that I-Con can be used to derive principled debiasing methods which improve contrastive representation learners.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162508</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>System thinking to analyze the Market penetration of Two-Wheeled vs Four-Wheeled EVs in India</title>
<link>https://hdl.handle.net/1721.1/162507</link>
<description>System thinking to analyze the Market penetration of Two-Wheeled vs Four-Wheeled EVs in India
Kumbhare, Piyush
This thesis analyzes the disparate market penetration rates of electric two-wheelers (E2Ws) and electric four-wheelers (E4Ws) in India, using systems thinking approaches to understand the underlying dynamics and propose strategic interventions. In 2024, while E2Ws have achieved 4.43% market penetration, E4Ws lag significantly at 1.91%, despite similar policy support. Through force field analysis and stakeholder value mapping, this research identifies key factors driving this disparity and evaluates their temporal evolution over three time horizons.&#13;
The analysis reveals that E2Ws benefit from stronger driving forces, including urban suitability, favorable total cost of ownership, and simpler charging solutions, with 91% of users relying on home charging. In contrast, E4Ws face more substantial barriers, particularly in upfront costs, charging infrastructure requirements, and range anxiety. Technical modeling of key Figures of Merit (FOMs) demonstrates how different optimization challenges affect each segment's market acceptance.&#13;
The research culminates in recommendations for accelerating E4W adoption, emphasizing the need for India-specific models priced comparably to internal combustion engine (ICE) vehicles, localized manufacturing ecosystems, robust charging infrastructure, and innovative financing solutions. The findings suggest that while E2W adoption will continue to grow naturally, E4W penetration requires coordinated interventions across manufacturing, technology, infrastructure, policy, and consumer awareness dimensions. This research contributes to understanding how systems thinking can inform strategic planning for electric vehicle adoption in emerging markets, with specific implications for India's goal of 30% EV penetration by 2030.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162507</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Generative Multi-Agent Systems for Collective Intelligence and Resilience</title>
<link>https://hdl.handle.net/1721.1/162506</link>
<description>Designing Generative Multi-Agent Systems for Collective Intelligence and Resilience
Dao, Nguyen Luc
Large Language Models (LLMs) have been increasingly adopted by businesses to support their workflows, driving significant investment in developing generative agents. These agents can collaborate and exchange information to solve complex problems. Previous research has found that the benefits of such multi-agent systems include better performance and the potential emergence of collective intelligence, characterized functionally as leadership, debate, and feedback. However, expanding multi-agent systems to include agents beyond trusted boundaries introduces the risk of malicious agents that provide incorrect or harmful information to degrade collective decisions or cause systemic failure. This study investigates how architectural decisions, including group size, agent prompting, and collaboration schemes, impact the system's resilience against malicious agents. Our experimental results show that increasing group size improves both accuracy and resilience at the cost of more tokens. Step-back abstraction prompting enhances accuracy and mitigates the likelihood of hallucinations induced by malicious agents. The Group Chat topology is highly vulnerable to malicious interference, while the Reflexion, Crowdsourcing, and Blackboard topologies offer safeguards against such risks. Finally, we expand our research to investigate accountability gaps in generative AI systems. Designing generative multi-agent systems requires careful consideration of the trade-offs between performance, cost, resilience, and accountability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162506</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>National Space Power Analysis Through Organizational and Market Evolution</title>
<link>https://hdl.handle.net/1721.1/162505</link>
<description>National Space Power Analysis Through Organizational and Market Evolution
Deline, Carrie B.
The space domain is undergoing fundamental changes and unprecedented growth. Once dominated by state-led missions, the space sector is now home to commercial competition, rapid innovation, and evolving models of public-private collaboration. These changes call into question how space power is built and maintained, especially amid rising geopolitical tensions and power competition in space. The rise of an agile commercial industry has driven down launch costs, accelerated technology development, and opened new markets and business cases, forcing legacy institutions to re-evaluate their strategies and business models.&#13;
&#13;
This thesis is motivated by the need to understand how organizations are responding to these changes, and how their choices collectively shape the United States as a national space power. Through the application of a theoretical space power model based on war strategy and Schumpeterian innovation theory, the different elements of space power are explored in today’s context. The thesis seeks to identify the organizational drivers of change, the tensions and synergies between legacy enterprises and new entrants, and the implications of the dynamic space ecosystem.&#13;
&#13;
This thesis presents a mixed-methods analysis, starting with a historical account of the sector’s evolution. The applied theoretical model is then introduced, informed by current market trends and by government policies and initiatives. The model is supported by market data, a force field analysis of organizational shifts, and qualitative interview insights from industry leaders. The research aims to contribute insights for government strategists and industry leaders concerned with America’s future as a space power and their organization’s role within it.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162505</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Noisy with a Chance of Mislabels: A Local and Training Dynamics Perspective on Detecting Label Noise in Deep Classification</title>
<link>https://hdl.handle.net/1721.1/162504</link>
<description>Noisy with a Chance of Mislabels: A Local and Training Dynamics Perspective on Detecting Label Noise in Deep Classification
Chentouf, A. Anas
Noisy labels are a pervasive challenge in modern supervised learning, especially in high-stakes domains such as healthcare, where model reliability is critical. Detecting and mitigating the influence of mislabeled data is essential to improving both performance and interpretability. Building on insights from training dynamics, we propose Local Consistency across Training Epochs (LoCaTE), a class of data-filtering methods that leverages over-parameterized and over-trained neural networks to distinguish clean samples from mislabeled ones. Our approach integrates both local neighborhood information and the behavior of samples across training epochs to identify noise and enhance model robustness. We evaluate our method on real (human) and synthetic label noise across three classification datasets, finding that it achieves competitive F₁ scores in label error detection and improved downstream accuracy using a lightweight classifier with low added computational cost.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162504</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing Multimodal Interactions through Improved Partial Information Decomposition Estimation</title>
<link>https://hdl.handle.net/1721.1/162503</link>
<description>Analyzing Multimodal Interactions through Improved Partial Information Decomposition Estimation
Balachandran, Adithya S.
Multimodal AI aims to build comprehensive models by integrating information from diverse sensory inputs such as text, audio, and vision. However, significant challenges remain in understanding how these different modalities interact and contribute to downstream tasks. In particular, we seek to characterize how modalities complement each other, overlap in the information they convey, or contribute jointly to patterns that are not clear from any single modality alone. To address this, we propose novel methods for quantifying these multimodal interactions using information-theoretic techniques. Specifically, we introduce a novel estimator for Partial Information Decomposition (PID) using normalizing flows, which scales well to high-dimensional data. We also develop a new framework for estimating pointwise PID, which provides insights into how individual data points contribute to information sharing and interactions across modalities, and show how to apply this framework to anomaly detection. We demonstrate the effectiveness of our methods on a variety of high-dimensional datasets, including both synthetic and real-world multimodal data such as videos.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162503</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explanation Alignment: Quantifying the Correctness of&#13;
Model Reasoning At Scale</title>
<link>https://hdl.handle.net/1721.1/162502</link>
<description>Explanation Alignment: Quantifying the Correctness of&#13;
Model Reasoning At Scale
Bang, Hyemin
To improve the reliability of machine learning models, researchers have developed metrics to measure the alignment between model saliency and human explanations. Thus far, however, these saliency-based alignment metrics have been used to conduct descriptive analyses and instance-level evaluations of models and saliency methods. To enable evaluative and comparative assessments of model alignment, we extend these metrics to compute explanation alignment, the aggregate agreement between model and human explanations. To compute explanation alignment, we aggregate saliency-based alignment metrics over many model decisions and report the result as a performance metric that quantifies how often model decisions are made for the right reasons. Through experiments on nearly 200 image classification models, multiple saliency methods, and MNIST, CelebA, and ImageNet tasks, we find that explanation alignment automatically identifies spurious correlations, such as model bias, and uncovers behavioral differences between nearly identical models. Further, we characterize the relationship between explanation alignment and model performance, evaluating the factors that impact explanation alignment and how to interpret its results in practice.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162502</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Octavio: A Distributed System for the Sensing, Storing,&#13;
and Retrieval of Piano Playing Data</title>
<link>https://hdl.handle.net/1721.1/162501</link>
<description>Octavio: A Distributed System for the Sensing, Storing,&#13;
and Retrieval of Piano Playing Data
Abdulrezak, Ayyub
MIT has a wealth of pianos spread across its campus. These instruments are owned by various groups and MIT organizations. Every day, students, faculty, and extended members of the MIT community play and practice with them. However, there currently exists no available data on their usage. This project aims to create the infrastructure for capturing this data. To this end, we installed sensing equipment on pianos across campus, constructed a matching database and filesystem of all playing sessions across time, and established a public API for the retrieval of this data. The collected data will later be used to power a publicly accessible webpage of real-time and historical visualizations, as well as to bolster research into the characteristic piano playing of the MIT community.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162501</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topology optimization of buildings-scale structures with&#13;
material and fabrication constraints</title>
<link>https://hdl.handle.net/1721.1/162452</link>
<description>Topology optimization of buildings-scale structures with&#13;
material and fabrication constraints
Jewett, Jackson L.
The construction industry releases about 10% of anthropogenic carbon dioxide every year, primarily due to the manufacturing of construction materials. Structural optimization has been proposed as a means of improving material efficiency in buildings, and thus reducing material demand for construction projects. Topology optimization has great potential for materially efficient design because it is a free-form optimization method, allowing performant geometries to be computationally derived with minimal input from the user. However, topology optimization algorithms must be modified to account for the specific fabrication and material constraints that are inherent in construction practices. This thesis shares a collection of research projects related to the use of topology optimization for large-scale structures relevant to the construction industry. First, a novel algorithm is proposed for large-scale 3D printed structures. The work focuses on the limitations presented by the printing nozzle, and the anisotropies that arise in 3D printed systems. Second, topology optimization is modified for the design of structural glass. Several algorithms are developed, which are then used to design, fabricate, and test physical specimens to evaluate their real-world performance. Third, a framework is presented to design low-weight reinforced concrete structures. This system is used to design, build, and test reinforced concrete beams, so their performance can be compared to conventionally designed specimens. This thesis considers the diverse ways that topology optimization could be applied to design large-scale structures of various construction materials. The results demonstrate the types of computational techniques that can be used for generative design in the built environment.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162452</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a 3D Printer for Oxide-free Aluminum Transportation</title>
<link>https://hdl.handle.net/1721.1/162451</link>
<description>Development of a 3D Printer for Oxide-free Aluminum Transportation
Smith, Henry R.
Hydrogen as an energy carrier is abundant, has a high calorific value, and produces only water when combusted. However, directly transporting hydrogen incurs high costs due to its low density, and its low flammability limit makes it dangerous to transport. Aluminum has been proposed as an alternative energy carrier for its high density and ability to be stored at ambient conditions, allowing for cheaper transportation. Hydrogen can be produced on-site by reacting the aluminum fuel with water. However, when exposed to air, aluminum forms an inert oxide layer on its surface, preventing reaction. High reaction temperatures are required to overcome the oxide layer, leading to a high energy penalty. &#13;
&#13;
This thesis proposes a novel concept of aluminum encapsulation with a water-soluble polymer. A 3D printer was designed and fabricated which creates aluminum-polymer structures that do not oxidize during storage and can achieve a wide range of reaction rates with water by varying the structure surface area. This new approach provides several benefits: by removing the oxide layer before the reaction happens, the aluminum is in an “activated” state and can react at room temperature, reducing the energy required for reaction. Additionally, control over the reaction rate allows ideal production rates to be achieved, reducing waste products and meeting consumption demands. The unique manufacturing flexibility of 3D printers enables the fabrication of structures with wide ranges of surface-area-to-volume ratios. By shipping activated aluminum in the polymer structures, hydrogen can be produced locally and the need for expensive hydrogen transport can be eliminated.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162451</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Development of an Atmospheric Dispersion&#13;
Compensator for the LLAMAS instrument on the&#13;
6.5m Baade Magellan Telescope</title>
<link>https://hdl.handle.net/1721.1/162450</link>
<description>Design and Development of an Atmospheric Dispersion&#13;
Compensator for the LLAMAS instrument on the&#13;
6.5m Baade Magellan Telescope
Berlanga Molina, Gerardo A.
Atmospheric dispersion is a phenomenon caused by the wavelength-dependent refraction of incoming light by Earth’s atmosphere. For non-perpendicular angles of incidence, higher-energy light (shorter wavelengths) such as blue and violet is refracted more than its lower-energy counterparts, for example, red light. As such, when a telescope is pointed at a non-zero zenith angle, there is a varying vertical angular separation between the different wavelengths of incoming light. To mitigate this separation and improve the spectral response of a scientific instrument, an Atmospheric Dispersion Compensator (ADC) is employed. At its simplest, this is a device consisting of two zero-deviation prisms that counter-rotate to counteract the unwanted atmospheric dispersion. At zenith, their dispersion axes cancel each other out, and near the horizon, their axes are parallel such that their net dispersion is opposite to that of the atmosphere. Here, a novel opto-mechanical realization of an Atmospheric Dispersion Compensator is explored, involving two hollow stepper motors with the appropriate diameters to house near-athermally RTV-bonded powered optic lenses in order to meet dimensional constraints that prevented a more conventional ADC design. Using Hall-effect sensors, the ADC is able to reliably home without the need for motor encoders for positioning. Since the ADC’s installation on the Large Lenslet Array MAgellan Spectrograph (LLAMAS) and the latter’s subsequent successful commissioning on the 6.5m Baade Magellan Telescope, the ADC has been operating and helping astronomers every night.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162450</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Policy and Technical Recommendations for Integrating&#13;
Autonomy into Military Offensive Cyberspace Operations</title>
<link>https://hdl.handle.net/1721.1/162449</link>
<description>Policy and Technical Recommendations for Integrating&#13;
Autonomy into Military Offensive Cyberspace Operations
Wettstein, Benjamin
As AI technologies and autonomy mature, their application in the military, specifically to enhancing military cyberspace operations, has become both a strategic imperative and an adoption challenge. This thesis explores the challenge of effectively integrating autonomous cyber weapons systems into offensive military cyberspace operations. I offer both technical and policy recommendations to ensure autonomous technology development does not outpace its ability to be integrated. &#13;
&#13;
This thesis analyzes historical case studies, such as loitering munitions and escort jammers, to examine the potential for integrating autonomous cyber weapons systems into military offensive cyberspace operations. This analysis finds that the more autonomous and lethal a weapon is, the more difficult it is to integrate it into military operations.&#13;
&#13;
Subsequently, the current state of cyberspace operations is analyzed by discussing two cyberspace attacks, Stuxnet and Conficker. This analysis reveals that cyberspace operations currently demonstrate low to medium levels of autonomy and low levels of lethality. Therefore, there is a significant opportunity to adopt autonomous systems in the current context of offensive cyberspace operations. However, as the domain of cyberspace is transforming with the growth of complexity in technology, there are evolving legal, ethical, bureaucratic, and technical concerns. This thesis contains policy recommendations around technical standards, investment and acquisitions, and regulations regarding using autonomous cyber capabilities to address these challenges. Along with the policy recommendations, the core technical recommendation that enables autonomous cyber systems is the safe and effective deployment of human-machine interfaces to direct and control them. This thesis argues that interfaces are not merely supporting tools but are, in fact, the central technical mechanism for enabling traceability, oversight, and control in autonomous cyberspace operations. The future development and integration of autonomous cyber systems must&#13;
prioritize interface design tailored to varying degrees of autonomy and operator control.&#13;
&#13;
The technical portion of this thesis explores different interfaces for autonomous cyber systems, utilizing distinct models of autonomy within the Cyber Operations Research Gym (CybORG) simulation environment. Each interface corresponds to the three human-machine relationships discussed, which include a semi-autonomous interface (human in the loop), a supervised autonomous interface (human on the loop), and a fully autonomous interface (human out of the loop). These interfaces serve as a proof of concept, providing evidence that different levels of autonomy can be implemented on the same autonomous cyber system. Additionally, the use of LLMs to explain the actions taken by autonomous cyber systems is explored.&#13;
&#13;
Ultimately, this thesis contributes technical and policy recommendations for navigating the future of autonomous cyber warfare. As autonomous systems evolve in sophistication and capability, the U.S. military must adopt policy and technical mechanisms that enable autonomy without sacrificing oversight, accountability, or effectiveness.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162449</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Response of Arabidopsis to bacterial presence under iron stress</title>
<link>https://hdl.handle.net/1721.1/162448</link>
<description>Response of Arabidopsis to bacterial presence under iron stress
Kitzinger, Katherine A.
Iron availability is essential for the normal function of plants, but iron becomes less available for uptake under drought. A lack of iron can lead to early senescence, fewer and less nutritious crops, and in extreme cases, plant death. In response to these stressful conditions, microbial interactions can lead to improved plant health; however, the mechanism by which this occurs is not understood. In this study we cocultured an Arabidopsis MTP8 knockout line, which is susceptible to iron stress, with a subset of a previously established synthetic microbial community derived from healthy Arabidopsis roots. We cocultured the Arabidopsis lines and bacteria under three different iron levels in a hydroponics system and measured the dry weight and chlorophyll content ten days post inoculation. This study aims to narrow down the potential mechanism of the beneficial effects of bacteria on plants experiencing nutrient stress.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162448</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Topology Optimization with Hybrid Truss and Continuum Elements Types</title>
<link>https://hdl.handle.net/1721.1/162447</link>
<description>Interactive Topology Optimization with Hybrid Truss and Continuum Elements Types
Zhang, Eileen
Topology optimization is a rising tool in structural design that can improve material efficiency and promote sustainability. However, topology optimization is not widely used in industry because it is user-unfriendly, computationally expensive, and difficult to manufacture for. This thesis proposes a new framework that combines traditional discrete topology optimization with truss elements and continuum-element topology optimization, creating a more informed algorithm suitable for more practical design scenarios. In addition, a drawing toolkit is introduced to help users better interact with the system and steer it toward their desired outcome. The hybrid-element-type topology optimization is achieved by creating separate local stiffness matrices and mapping them respectively to the same global design space to perform optimization together. The interactive drawing functions allow users to add truss members, selecting how many to include and drawing their lengths and locations in the design space. This framework is tested on multiple classic topology optimization problems, including a cantilever beam with bracings and the MBB beam. All hybrid topology-optimized results with drawn-in trusses show more efficient designs, with lower compliance and lower overall material quantity.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162447</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>G-Code Based Toolpath Simulation for Predicting CNC Energy Consumption</title>
<link>https://hdl.handle.net/1721.1/162446</link>
<description>G-Code Based Toolpath Simulation for Predicting CNC Energy Consumption
Anziani, Jonathan
Machining is an energy-intensive process, and being able to model the energy consumption of machining would allow manufacturers to consider how to reduce their energy footprint. While many models have been developed for estimating energy consumption, they are not easily applicable or accessible to CNC machining, where the material removal rate is variable. This thesis develops a G-code based simulation that uses a voxel mesh to virtually recreate material removal, approximating the material removal rate at discretized points in the machining process. Using an energy consumption model and machine power data, material removal rates are related to the power consumption of machining the part. The simulation pipeline was validated using power data collected from literature, and for a constant material removal rate the model showed an average absolute error of 3.17% when predicting power and 2.89% when predicting specific energy consumption for simulated test geometries.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162446</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing Inconsistent Results of Table Transformer for Improved&#13;
Data Extraction in Childhood Obesity Intervention Literature</title>
<link>https://hdl.handle.net/1721.1/162445</link>
<description>Analyzing Inconsistent Results of Table Transformer for Improved&#13;
Data Extraction in Childhood Obesity Intervention Literature
Neupane, Pragya
Tables in scientific literature are rich sources of structured data, yet their complex and variable formats pose challenges for automated extraction. This thesis focuses on improving the reliability of Table Structure Recognition (TSR) using the Table Transformer (TATR) model, with a specific application to childhood obesity intervention studies. While fine-tuning TATR on a domain-specific dataset improves detection metrics, persistent errors such as overlapping rows and misclassified header columns remain. Through a systematic post-hoc error analysis of 175 scientific tables, we identify these dominant failure modes and develop lightweight post-processing modules: an overlap-aware row filtering algorithm and an OCR-enhanced column boundary correction method. Importantly, instead of relying on computationally expensive large language models (LLMs), this approach leverages efficient, interpretable techniques tailored to the domain-specific structure of public health tables. Our combined method reduces the proportion of structurally erroneous tables from 46.3% to an estimated 9.7–12.6%, improving the semantic alignment and interpretability of model outputs. This work contributes a transparent and scalable pipeline that enhances the trustworthiness of automated table extraction systems, with direct relevance to evidence-based decision-making in public health.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162445</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing Opportunities to Reduce Carbon Dioxide Emissions from Electric Arc Furnace Steelmaking in the United States</title>
<link>https://hdl.handle.net/1721.1/162444</link>
<description>Assessing Opportunities to Reduce Carbon Dioxide Emissions from Electric Arc Furnace Steelmaking in the United States
Colcord, Christopher C.
Steel is energy- and CO₂ emissions-intensive to produce, but it is also a crucial material for infrastructure, defense, and the energy transition. This thesis focuses on Electric Arc Furnace (EAF) steelmaking, which accounts for roughly 70% of steel production in the United States. Decarbonization levers for EAF producers are diverse—encompassing energy efficiency (EE) measures, fuel switching, material input substitution, development of onsite carbon-free electricity (CFE) generation, CFE procurement through power purchase agreements (PPAs) or unbundled renewable energy credits (RECs), and negative-emissions credit purchases, among others. We first construct a techno-economic model that analyzes costs and emissions of individual EAF facilities in the United States under a business-as-usual (BAU) scenario for the years 2025 through 2035. We then calculate the Levelized Cost of Carbon Abatement (LCCA) of various decarbonization levers against the BAU counterfactual. We build aggregate LCCA curves to draw insights on least-cost emissions abatement strategies for facilities and opportunities for policy to accelerate decarbonization decisions.&#13;
&#13;
We find that the modeled levers collectively deliver a 46% reduction in EAF CO₂ emissions versus the BAU case—equivalent to a reduction of roughly 1.7% of national industrial CO₂ emissions. Voluntary CFE procurement has the greatest potential to abate EAF emissions, but comes with large uncertainties. Onsite CFE and PPAs have negative LCCAs in most cases, whereas unbundled RECs have positive LCCAs. EE measures provide modest emissions reductions and costs are negative on a levelized basis under a wide range of assumptions. EE opportunities, onsite CFE, and PPAs may be bound by non-financial constraints. Direct reduced iron (DRI) with carbon capture has lower variable costs and produces fewer emissions versus hydrogen-based DRI in most cases. While the challenges to decarbonize EAF steelmaking are immense, we find EAF facilities can take actionable steps in the near term—supported by federal and state policies—to abate carbon emissions while reducing levelized costs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162444</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the Capacity of Generative AI to Learn&#13;
Genotype-by-Environment Interactions in Brachypodium&#13;
distachyon</title>
<link>https://hdl.handle.net/1721.1/162443</link>
<description>Investigating the Capacity of Generative AI to Learn&#13;
Genotype-by-Environment Interactions in Brachypodium&#13;
distachyon
Neufeldt, Charlie
Climate change exacerbates environmental stressors such as drought, challenging the resilience of agricultural systems and highlighting the need to understand plant genomic architecture and its responses to such environmental variation. A key molecular mechanism underlying these responses is transcriptional plasticity: environment-induced changes in gene expression that vary among genotypes, representing one way that genotype-by-environment (GxE) interactions manifest at the molecular level. While transcriptomic data offers a unique and powerful view into these responses, traditional modeling approaches often rely on linear assumptions, limiting their ability to detect complex, nonlinear patterns of regulation. This thesis investigates whether generative machine learning modeling, specifically the use of transformers, can extract biologically meaningful representations of gene expression dynamics in plants. Inspired by the successes of the scGPT model for human genomics, I developed and trained a compact transformer architecture, the PlantGeneEncoder, on bulk RNA-seq data from two natural accessions of Brachypodium distachyon grown under drought and control conditions. The model was trained on binned expression values using both a baseline configuration and a set of regularized variants incorporating noise injection, co-expression preservation, entropy-based sample weighting, and masked gene modeling as a self-supervised objective. While baseline models achieved perfect reconstruction accuracy, they failed to preserve meaningful biological structure in the latent space. Regularized models achieved a better trade-off, maintaining high reconstruction fidelity while demonstrating improved genotype classification performance and modestly better alignment with the original expression structure. However, environmental condition signals remained difficult to capture across all configurations, with classification accuracies only marginally above random chance. 
These findings highlight the promise and limitations of transformer-based generative modeling for plant transcriptomics and provide a flexible framework for future efforts to model transcriptional plasticity and regulatory responses to environmental stress.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162443</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Active Multi-View Object Pose Estimation Using a Mobile Ground Robot</title>
<link>https://hdl.handle.net/1721.1/162442</link>
<description>Active Multi-View Object Pose Estimation Using a Mobile Ground Robot
Wynia, Ethan
Accurate 6-DoF object pose estimation remains a central challenge in robotic perception, particularly when relying on single-view observations subject to occlusions and limited geometric cues. This thesis presents a system that incrementally refines object pose estimates by collecting multi-view observations with a mobile ground robot, the Clearpath Jackal. The robot autonomously navigates in a circular trajectory around a target object, capturing images while maintaining a fixed orientation toward the object center. At each waypoint, 2D image corners are manually annotated and paired with corresponding 3D object coordinates. The Perspective-n-Point (solvePnP) algorithm is then applied to estimate the object's pose relative to the camera. The system transforms these camera-centric poses into a consistent global frame using the Robot Operating System (ROS) transform library. Using these poses, the system tracks reprojection error to evaluate pose confidence. Across multiple trials, the mean reprojection error consistently decreased as more views were added, confirming that spatially diverse observations improve pose estimation accuracy. A cross-run analysis shows reproducible trends, with error reductions of over 40% in many cases. These results validate the efficacy of active multi-view collection for reducing uncertainty and lay the foundation for future extensions with automated keypoint detection and You Only Look Once (YOLO)-supported multi-object detection.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162442</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of an FSAE Cooling System for Electric In-hub Motors</title>
<link>https://hdl.handle.net/1721.1/162441</link>
<description>Design of an FSAE Cooling System for Electric In-hub Motors
Lohawala, Sehar
This thesis explores the design of water cooled electric in-hub motors for use in high power automotive applications such as Formula SAE Electric student racecars. The main components of the drivetrain of the 4WD electric vehicle are the motor and the motor controller. This thesis focuses on designing a reliable cooling system for the motors to ensure that they operate in an optimal temperature range that increases drivetrain efficiency, prevents catastrophic motor damage, and ultimately improves vehicle performance.&#13;
&#13;
During the design process, extensive heat transfer analysis was conducted for liquid cooling, air cooling, and heat pipes. Additionally, CFD was conducted for various water cooling architectures to determine the influence of coolant flow direction and channel dimensions on cooling performance. Results were subsequently analyzed, plotted, and used in the motor cooling architecture selection process.&#13;
&#13;
After determining the motor cooling sleeve architecture and dimensions, detailed CAD was created for both plastic prototype sleeves as well as metal 3D printed sleeves. Testing demonstrated that the cooling sleeve successfully met its target pressure drop of 3-4 psi. More thorough testing and data collection of the cooling sleeve’s thermal performance is still in progress. While details on the CAD, prototyping, testing &amp; validation, and manufacturing are outside the scope of this thesis, images are included for reference.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162441</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Open-IRIS: Low Cost, Fully Open Source In-situ Infrared Inspection of Silicon</title>
<link>https://hdl.handle.net/1721.1/162440</link>
<description>Open-IRIS: Low Cost, Fully Open Source In-situ Infrared Inspection of Silicon
Becker, Aaron M.
Short-Wave Infrared (SWIR) imaging has become a powerful and well-known technique over the last two decades for silicon inspection and imaging. Open-IRIS is a low-cost, fully open-source system for in-situ InfraRed Inspection of Silicon devices (IRIS). It is designed to&#13;
lower the cost barrier for academic and research users requiring high-precision IR imaging of silicon microelectronics. This thesis details the design and implementation of the Open-IRIS platform, including its optomechanical components, motion control system, electrical system and software architecture. Its design is highly modular and low cost, making it an invaluable and extensible tool for many future applications, including microarchitectural security research, chip failure analysis, and biological imaging. Key design challenges, such as achieving high mechanical and optical resolution on a budget are addressed. Computational microscopy techniques, including Fourier Ptychographic Microscopy (FPM), are evaluated to improve resolution. The system’s imaging resolution on a standard resolution target is evaluated, as well as its motion repeatability and accuracy. &#13;
&#13;
Results show that Open-IRIS achieves 5.34 μm optical resolution with a 5x objective, and 3.47 μm resolution with a 20x objective. Mechanically, it has 6.5 μm repeatability and 35.5 μm accuracy, all on a total budget of less than US$1,000, a fraction of the cost of comparable commercial systems. The complete design is fully open-source, enabling broader access to advanced chip inspection&#13;
techniques, and serves as an excellent starting point for future expansion into advanced security research like laser fault injection.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162440</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrifying a Jet Ski: Designing &amp; Manufacturing an Electric Drivetrain for a Personal Watercraft</title>
<link>https://hdl.handle.net/1721.1/162439</link>
<description>Electrifying a Jet Ski: Designing &amp; Manufacturing an Electric Drivetrain for a Personal Watercraft
Hudspeth, Blake H.
While many vehicles are well into their transition to electrification, marine vessels are lagging behind; specifically, there are very few electric personal watercraft (PWCs) on the market. Though many engineers and hobbyists have retrofitted combustion-engine cars, motorcycles, and other automobiles for electric propulsion, there are almost no examples of electric conversions for watercraft. This thesis details the design and manufacture of an electric drivetrain for a 1997 Yamaha Wave Venture 760 personal watercraft (PWC). This project aims to serve as a proof of concept for replacing the combustion engine of an older PWC with an electric motor and lithium-ion battery. I performed rigorous calculations to properly size a battery and motor to propel the watercraft. After learning to weld, build battery packs, and configure a battery management system, I prototyped extensively, iterating on battery packs and pack configurations. I assembled a final 72V, 10.4 Ah battery pack with waterproof housing to power the 5 kW motor that I coupled to the existing impeller drive shaft. Lastly, I performed a dry test monitoring battery and motor health, confirming a successful retrofit of the electric drivetrain and proving the feasibility of electrifying a used personal watercraft.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162439</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical Analysis of Human-Informed Topology Optimized Lateral-Load-Resisting Systems of Tall Buildings under Seismic Excitation</title>
<link>https://hdl.handle.net/1721.1/162438</link>
<description>Numerical Analysis of Human-Informed Topology Optimized Lateral-Load-Resisting Systems of Tall Buildings under Seismic Excitation
Blaze, Edie
In the construction industry, structural, architectural, and environmental considerations can often be at odds with each other, leading to inefficient structures and, consequently, material waste. Topology optimization has shown promise as one potential solution to this problem, offering designs that are both structurally efficient and aesthetically interesting. However, topology-optimized designs are often difficult to manufacture or do not take into consideration other aspects that are crucial in the construction industry. Human-informed topology optimization, or HiTop, is a previously-developed algorithm that allows users to edit areas of interest, providing a computationally-efficient solution to address concerns with the designs. This paper uses MATLAB to apply HiTop to the design of the lateral-load-resisting systems of tall buildings, comparing results to those of three other designs: a “human” design with standard cross bracing, an optimized design using classical topology optimization, and a design from a previously-developed algorithm that optimizes designs under a sum-of-modal-compliances formulation, similar to how structures are analyzed in seismic codes. The designs are evaluated quantitatively, comparing natural periods, modal displacements, sum of modal compliances using modal decomposition, as well as computation time. They are also evaluated qualitatively, as HiTop is used to modify designs to improve constructability and aesthetics. The HiTop algorithm successfully created manufacturable, aesthetic designs in line with the user’s goals across a range of H/B ratios within a brief time frame. HiTop designs also performed similarly to the classically optimized designs, indicating that modifications to an optimized design to improve manufacturability, aesthetics, or other potential goals of a user do not significantly decrease structural performance under seismic loading.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162438</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Peatland burning identification among other wildfires across different ecozones in Canada</title>
<link>https://hdl.handle.net/1721.1/162437</link>
<description>Peatland burning identification among other wildfires across different ecozones in Canada
Chen, Ming
The unprecedented severity of the 2023 Canadian wildfires highlights growing concerns about the vulnerability of global peatlands—key ecosystems storing substantial amounts of terrestrial carbon. Peatlands, traditionally resistant to burning, are increasingly at risk due to climate-induced warmer and drier conditions. This study specifically investigates the extent and characteristics of peat burning in the 2023 Canadian wildfires based on available remote sensing data. The primary objective is to determine whether fires on peatlands demonstrate distinct fire behavior compared to fires on non-peatland areas. To achieve this goal, this study utilized statistical tools and machine learning algorithms, including power-law relationship estimates, the Mann-Whitney U test, K-means clustering, and a generalized additive model (GAM), to identify the contribution of peat presence to fire behavior. Key findings demonstrate that fires on peatland are significantly more intense, longer-lasting, and associated with higher carbon emissions. Even though peat combustion cannot be confirmed without field validation, these results underscore the potential impact of peat on wildfire growth and management. By highlighting the disproportionate impact of peat burning, this study provides a foundation for future research aimed at developing targeted remote sensing techniques and policy responses to mitigate peatland vulnerability and preserve vital carbon stores in the context of global climate change.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162437</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effects of different stiffnesses and coefficient of drag on the performance of a Padel racket</title>
<link>https://hdl.handle.net/1721.1/162436</link>
<description>The effects of different stiffnesses and coefficient of drag on the performance of a Padel racket
Mora Armendariz, Francisco David
The sport of padel is the fastest-growing sport in the world. As the sport has evolved, there have been more and more innovations in padel equipment; one of the most important was the move from wooden paddles to ones made of fiberglass and carbon fiber. For this study, experiments were first conducted to obtain data on existing commercially available rackets; from these data, a specific design was proposed, and a prototype racket was designed and manufactured. The prototype was then compared to the other rackets using the same measurements as for the original properties. Although this racket needs continued iteration, it shows promise as a viable competitive racket.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162436</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tension Activated Kirigami Structures for Interlayer Use in Circular Construction Applications</title>
<link>https://hdl.handle.net/1721.1/162435</link>
<description>Tension Activated Kirigami Structures for Interlayer Use in Circular Construction Applications
Bigler, Thomas
Motivated by the use of fully circular materials and improving sustainability, this thesis investigates the use of tension-activated kirigami as an architected material, for which mechanical properties are determined by their designed structure (geometry) rather than the bulk material itself. These architected materials can be used as a dry replacement for adhesives in construction with glass masonry units, enabling reclaimability and recyclability of both the interlayer material and glass masonry unit. The kirigami design and material selection allow for the customization of architected-material properties for best compatibility with the glass units. The research has involved both analytical/mathematical modeling for early material and design selection and an experimental process to develop an empirical database. Experimentation on different materials, designs, aspect ratios, etc. has provided data to begin extrapolating trends and behaviors of the architected material. These data will allow for design decisions and material selections based on the functional requirements of a specific structure or application.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162435</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sustainable Engineering of Polyethylene Fiber Materials: Advancing Functional Properties of Diverse Textile-Based Structures</title>
<link>https://hdl.handle.net/1721.1/162434</link>
<description>Sustainable Engineering of Polyethylene Fiber Materials: Advancing Functional Properties of Diverse Textile-Based Structures
Huynh, Amy
This thesis explores pathways to circularity for polyethylene-based textiles through an integrated framework that combines material experimentation, systems-level policy analysis, and cultural innovation. Focusing on olefin block copolymer (OBC) filaments—engineered with semicrystalline polyethylene hard segments and elastomeric soft blocks—the study evaluates their mechanical behavior across a range of stitch-based textile geometries. Cyclic and postfatigue tensile testing reveals how formulation and structure shape energy dissipation and durability, informing design strategies for high-performance applications such as intra-vehicular spacesuits and wearable technologies. To understand the broader systems context, the thesis analyzes barriers to integrating recycled polyethylene (rPE) into textile supply chains, identifying economic, legal, institutional, technological, firm-level, and societal constraints. It proposes targeted strategies based on global policy trends, EU case studies, and a geospatial analysis of U.S. recycling infrastructure. Finally, the work explores how generative AI can revitalize traditional craft practices—such as bobbin lace—by co-creating patterns designed for both aesthetic and functional performance in new materials. Together, these efforts propose a model for advancing sustainable textile innovation that bridges material science, circular design, and policy transformation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162434</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Design in Operations</title>
<link>https://hdl.handle.net/1721.1/162433</link>
<description>Experimental Design in Operations
Wang, Chonghuan
Experimental design has been fundamental to many fields, yet applications in operations research (OR) and operations management (OM) bring in complexities, as well as opportunities, that extend beyond classical statistical goals. This thesis discusses why OR/OM should care about experimentation and its design, where the challenges lie in operational and service systems for classical experimental design, and why OR/OM researchers are uniquely suited to address the challenges. More specifically, this thesis advances experimental design by introducing more operational perspectives, addressing two core challenges: incorporating operational objectives and leveraging operational modeling to enhance experimentation.&#13;
&#13;
First, traditional experimental approaches, such as A/B testing, primarily aim at statistical efficiency (e.g., reducing variance or bias). However, OR/OM applications frequently involve additional operational considerations, such as welfare preservation, revenue optimization, risk control, and non-stationarity. We investigate these settings in Chapters 2–4, developing frameworks for multi-objective experimental design. In Chapter 2, we introduce a minimax multi-objective optimization formulation to balance statistical power and welfare loss, derive necessary and sufficient conditions for Pareto optimal solutions, and propose robust multi-armed bandit designs. Chapter 3 extends this approach to pricing experiments, exploring trade-offs between estimating causal effects (price elasticity), maximizing revenue, and controlling tail risks, along with robust statistical inference methods. Chapter 4 addresses non-stationary experimental environments where treatment effects dynamically evolve, designing experiments that optimally balance accurate estimation of changing effects and welfare loss minimization.&#13;
&#13;
Furthermore, we highlight the substantial value of operational models—particularly Markov Decision Processes (MDPs)—in experimental design. In Chapter 5, we address the challenge of estimating long-term cumulative outcomes, such as customer lifetime value, using short-term experimental data. We develop novel inference methods grounded in MDPs, which effectively bridge short-term data to long-term outcomes. Moreover, by recognizing many real-world treatments tend to be localized for practical efficiency, we introduce novel estimators that leverage the localized structures to achieve substantial variance reductions.&#13;
&#13;
In summary, this thesis underscores how OR/OM contexts uniquely enrich experimental design, offering robust theoretical frameworks and practical solutions to operational challenges, ultimately broadening both the theoretical foundations and the practical impacts of experimentation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162433</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fueling Conflict: A Global Dataset of Energy Protests</title>
<link>https://hdl.handle.net/1721.1/162432</link>
<description>Fueling Conflict: A Global Dataset of Energy Protests
Harrison, Ethan
How do popular grievances about (the lack of) access to energy lead to political violence and instability? I use a mixed-methods approach to answer this question, based on a qualitative case study in Sri Lanka and a quantitative framework for tracking energy protests worldwide. Specifically, through an analysis of the 2022 Aragalaya protest movement in Sri Lanka, I elaborate on how breakdowns in state capacity to provide energy to its citizens can trigger civilian unrest. Building on this case study, as well as insights from the empirical literature on the drivers of instability related to energy access, I then pilot a machine learning (ML) framework to identify energy-related protest events in the Armed Conflict Events Database (ACLED) based on context-specific keywords, which results in the creation of the first global dataset on energy protests. This novel source of evidence, in turn, will open new avenues for research on the conflict-energy nexus, particularly on the impact of market shocks on civilian unrest and instability in low- and middle-income countries – a topic for which current empirical work is limited. I show how the ML framework I develop here can be used to enable continuous monitoring of protest activity related to energy access, as well as how the framework can be extended to other forms of political violence, offering a promising tool for peace-building initiatives across contexts. Therefore, such a framework could inform key evidence to support policymakers, practitioners, and researchers in the design of strategic policies that facilitate the provision of energy while mitigating the risk of conflict and instability worldwide, particularly in "energy-poor" countries.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162432</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Circular Lunar Economy: Incentivizing the Design of Multi-Purpose Reusable Lunar Landers and Rovers</title>
<link>https://hdl.handle.net/1721.1/162431</link>
<description>Towards a Circular Lunar Economy: Incentivizing the Design of Multi-Purpose Reusable Lunar Landers and Rovers
Khan, Nadia Rehman
Both NASA and ESA have committed to establishing a lasting presence on the Moon by 2030. However, lunar surface debris has already exceeded 200,000 kg, prompting concerns about the environmental, operational, and economic viability of future missions. This thesis proposes that circular economy principles, particularly reusability, modularity, and interoperability, must be embedded in early mission architecture to reduce waste and improve system longevity. To evaluate these goals, this thesis introduced a novel decision-support framework, the Lunar Exploration Impact Assessment (LEIA), alongside a policy-informed set of Lunar Surface Sustainability Guidelines (LSSG). Both decision-support tools were designed to help mission designers and space policy stakeholders incentivize the design of resilient, reusable lunar landers and rovers. In this thesis, the LEIA framework was applied to two case studies, NASA JPL’s Endurance-A autonomous lunar sample return rover and ESA’s multi-purpose Argonaut lander, to evaluate the sustainability of each spacecraft after the EOL/M phase of each mission. Scores were computed using a Multi-Criteria Decision Analysis (MCDA) approach. Seven Impact Assessment Indicators (IAIs) were considered to assign a sustainability rating for each mission: cost-effectiveness, environmental impact, science value, redundancy, resilience, strategic value, and technological feasibility. The Endurance-A mission achieved a sustainability score of 66.4%, based on a post-primary-mission sample collection scenario, indicating moderate sustainability across some categories such as cost-effectiveness (18.9%) and technological feasibility (12%). However, the environmental impact score was limited to 7.7%, due to the out-gassing and launch emissions associated with the SpaceX Starship lander.
The rover’s redundancy and maintainability ratings also constrained the overall sustainability rating, highlighting a gap in the availability of tools suitable for EVA-based repairs on the lunar surface. The subsystems most at risk of degradation (mobility, thermal, and power) require enhanced design for long-term reuse scenarios. Each of these factors was made salient through the Argonaut case study, indicating that, in the short to medium term, lunar rovers and landers must be designed to be more resilient to the conditions of the lunar environment in order to prevent the accumulation of lunar surface debris. To supplement the LEIA framework, a set of policy recommendations was developed to address the lack of End-of-Life (EOL) procedures in place to manage lunar surface debris in the form of retired lunar missions. The guidelines detail how economic policy mechanisms adopted in circular economy systems could be leveraged to incentivize the design of sustainable lunar surface missions and operations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162431</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beam Mechanism Failure in Multistory Steel Frame Structures</title>
<link>https://hdl.handle.net/1721.1/162430</link>
<description>Beam Mechanism Failure in Multistory Steel Frame Structures
Hashbarger, Brad
Engineers must ensure that building structures are not in danger of collapse, so analyses always include safety factors that create redundant yet materially inefficient buildings. This has been common practice for most of structural history, but today, growing concerns over carbon emissions force designers to cut material usage while retaining the same level of safety. Designs typically opt for one of two approaches: an overall lighter structure or the stiffening of specific internal systems to encourage a load path. The problem with either of these options lies in progressive collapse in the event of structural damage. If one column is lost, stresses propagate until either equilibrium is reached or a larger collapse occurs. Progressive collapse remains a popular research area for identifying specific vulnerabilities, often with numerical models for a visualization of each stress state and redundant capacity. Previous studies used analytical and experimental performance to observe the critical effects of losing an external versus internal column and the role of other components, such as joints, joists, and composite slabs, in carrying additional loads. However, designs and analyses are bound by assumptions that govern model behavior. To understand the sensitivity and limits of these assumptions, this thesis predicts the performance of steel moment-frame structures of varying bay geometries, proposing deflection fields to inform modern practice in all phases of project development. Instead of numerical simulations, the process follows an analytical approach based on the fundamental methods of equilibrium and the conservation of work and energy. By designing sections for their elastic capacity, their operational performance is directly linked to their failure response. This suggests the dominance of design preferences in stability, even with changes in beam spans or floor loading.
Results support an optimal span ratio for plasticity under two-way load distributions that favors bay geometry ratios (L1/L2) between 1 and 2 but varies based on failure locations and how many columns have been lost. This also emphasizes the weaknesses out of plane as span ratios range from 0.5 to 1. Project layouts can utilize the free strength provided by bay geometries as part of the structural design process. If large deflections or span lengths are expected, beam depth and section thickness should increase together to ensure beams utilize their full plastic capacity to achieve additional redundancy from catenary action. Overall, the thesis demonstrates that such considerations in the early design stage can enable steel structures to achieve greater safety with less material.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162430</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Design and Fabrication of a Punch and Die System for 2.008 Thermoformed Parts</title>
<link>https://hdl.handle.net/1721.1/162429</link>
<description>The Design and Fabrication of a Punch and Die System for 2.008 Thermoformed Parts
De Jesus, Sebastian
In 2.008, Manufacturing and Design II, our team successfully manufactured 100 identical yo-yos. Although the class is very well structured and the CNC milling, injection molding, and thermoforming in the Laboratory for Manufacturing and Productivity (LMP) were all optimized for the class, the punch and die system was one process that was more tedious than the rest. Punches, a die, and a calibration piece were designed, fabricated, and tested to find the best clearance size and to fill the gap in documentation on punching plastics. A new, working system was successfully fabricated and assembled, and a clearance of 5% was determined to have a lower margin of alignment error. The new punch system will be implemented in the LMP and used by 2.008 students.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162429</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating Aboveground Biomass (AGB) Throughout the Pacific</title>
<link>https://hdl.handle.net/1721.1/162428</link>
<description>Estimating Aboveground Biomass (AGB) Throughout the Pacific
Domingo-Kameʻenui, Joy P.
Aboveground biomass (AGB) is a significant carbon pool in forests, making AGB a good indicator of forest health and carbon storage. AGB has been studied on multiple scales, in which allometric equations were developed to find relationships between AGB and tree parameters. However, despite the presence of AGB studies for specific sites in the Pacific Islands, there is a lack of AGB comparative studies or data syntheses focused on the Pacific Islands as a whole. This study synthesized data on AGB, tree height H, land cover, and Pacific Island forest community to develop allometric equations using linear and polynomial regression models for trees in the Pacific based on H as the main parameter. This study found polynomial relationships between AGB and H for shrub and herbaceous covers. Specifically, AGB = 1.76 H^2 − 51.01 H + 346.53 for shrub cover (adjusted R^2 = 0.94, n = 39), and AGB = 1.11 H^2 − 81.97 H + 1167.20 for herbaceous cover (adjusted R^2 = 0.71, n = 79). However, future research and data collection would be necessary to develop allometric equations for tree cover and barren land cover. No significant correlation was found between AGB and H for Pacific Island forest community. This study may help with forest management and conservation practices, along with carbon sequestration and storage practices in forests, in the Pacific Islands. This study may also contribute to Pacific-led climate change mitigation and adaptation methods and initiatives.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162428</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>AI-powered Data Mining for the Development of Sustainable Concrete Materials</title>
<link>https://hdl.handle.net/1721.1/162427</link>
<description>AI-powered Data Mining for the Development of Sustainable Concrete Materials
Duan, Yifei
Data mining has become essential to contemporary industrial and scientific research, playing a pivotal role in uncovering insights from large-scale industrial datasets and literature collections. The sustainable transition of the concrete industry, a major contributor to global CO₂ emissions, demands both operational optimization and scientific innovation. This thesis presents comprehensive data mining frameworks for both industrial and literature source data to support the development of more sustainable concrete materials. Focusing on concrete manufacturing, we develop AI-powered methodologies tailored to real-world industrial data and complex scientific literature. For industrial data mining, we propose to incorporate interpretability and realistic engineering design scenarios to enhance the reliability of both predictive and prescriptive modeling of concrete mixes containing supplementary cementitious materials (SCMs). A domain-informed amortized Gaussian process and a shallow multi-layer perceptron (MLP) are shown to possess superior scientific consistency in predicting time-varied compressive strength, and time-invariant slump and air content properties, respectively. The explainable surrogate property models are applied in mix design optimization under a variety of realistic scenarios considering different engineering design requirements and SCM costs and densities. The importance of the comprehensive property constraint set is demonstrated in comparison against a baseline using only a 28-day strength constraint, which results in unreasonable property values. The necessity of differentiating realistic scenarios is also highlighted through the differences among optimized mixes and their production costs and climate impacts. Higher design strength, higher design slump, lower design air content, higher SCM density, and higher SCM unit cost can all drive up production costs. 
Though stratification patterns in the production costs of optimized mixes are observed across different scenarios, the mix-wise climate impacts are not clearly stratified, indicating that substantial emission reduction can be achieved without significantly increasing costs, regardless of the realistic scenario. For literature mining, a novel method is developed that fine-tunes lightweight large language models (LLMs) (pythia-2.8B) with multiple-choice instructions. Because the multifaceted linguistic complexity of communication within the domain renders the conventional named-entity-recognition approach infeasible, the new method achieves high information-inference accuracy in a time-, cost-, and computation-efficient manner, outperforming the GPT-3.5 in-context learning baseline by over 20%. A knowledge graph is constructed with the literature-mined data, offering insights to promote alternative material substitution strategies in concrete production, as the current commercial SCMs are not comprehensively sustainable in the longer term. Statistical summary and temporal trend analyses are adopted to provide both static and dynamic insights into the research landscape. Although SCMs have remained a research hotspot, results reveal a systematic shift in recent studies from commercial SCMs to other materials. Geopolymer and fine aggregate studies have surged in the recent period, while clinker feedstock and filler studies have declined. A node similarity metric is modified to develop a model-free link prediction algorithm, enhanced with random graph perturbation for robustness and uncertainty quantification. Through link prediction, the currently underexplored lime-pozzolan cement application emerges as a potentially promising future research direction.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162427</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metabolic Scaling Analysis of Building Energy Efficiency: A Case Study of Massachusetts Institute of Technology</title>
<link>https://hdl.handle.net/1721.1/162426</link>
<description>Metabolic Scaling Analysis of Building Energy Efficiency: A Case Study of Massachusetts Institute of Technology
Hsu, Yu-Hsuan
The building sector plays a critical role in global energy consumption and carbon emissions, accounting for 21% of global GHG emissions (12 GtCO₂-eq) and 31% of global final energy demand (128.8 EJ) in 2019 (Cabeza et al. 2022). This reality underscores the urgent need to enhance energy efficiency within the sector. This research applies ecological metabolic scaling principles to building energy analysis, utilizing the Massachusetts Institute of Technology (MIT) campus as a case study. Analogous to biological systems, where an animal’s metabolic rate scales as the 3/4 power of its mass, our findings indicate that larger buildings, similar to larger organisms, are inherently more energy efficient.&#13;
Furthermore, an analysis of overall energy consumption at MIT from 2009 to 2020 reveals a steady decline, though not proportionally, as the scaling exponent fluctuated with a decreasing trend (&lt;3/4), indicating improved efficiency in larger buildings. However, the COVID-19 pandemic in 2020 acted as a major shock, disrupting this trend. This disruption was likely driven by operational and behavioral changes, including reduced occupancy, increased remote work, and adjustments to ventilation and heating systems to ensure health and safety. These shifts highlighted the system’s tendency to return to the baseline scaling exponent of 3/4, demonstrating regression to the mean and ultimately pushing efficiency back to its prior baseline level of 25%.&#13;
Additionally, the study includes case analyses of specific buildings on the MIT campus to provide deeper insight into comparative energy performance. While several guidelines for energy systems have been proposed, certain limitations remain. Future research should focus on expanding the dataset to help validate the applicability of these findings to other contexts while also accounting for variations in building types. Ultimately, this study aims to facilitate the development of more effective policies and innovations in building energy management.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162426</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Strain-Rate Responses of Mechanical Metamaterials</title>
<link>https://hdl.handle.net/1721.1/162425</link>
<description>High Strain-Rate Responses of Mechanical Metamaterials
DesRoberts, Collin G.
Mechanical metamaterials—materials with deterministic microstructures that attain unique combinations of properties—have revolutionized the parameter space of engineering materials over the last decade. While their quasi-static mechanical responses have been thoroughly characterized, their responses in the dynamic regime are not fully understood, especially at strain rates above 10^3 s^−1. Using microscale uniaxial compression and custom microscale Kolsky bar capabilities, we uncover the strain-rate dependence of mechanical metamaterials over eight orders of magnitude, ranging from strain rates of 10^−3 to 10^5 s^−1. Herein, we describe the development and execution of direct impact experiments using a custom-built micro Kolsky bar set-up, delving into the details of its design, fabrication, and data analysis. We first characterize the rate dependence of the polymer used for sample fabrication, IP-S, and relate it to the responses of different metamaterial morphologies at the same strain rates. The results of these experiments uncover how geometry greatly affects the rate dependence of mechanical properties in the dynamic regime. Understanding the high strain-rate behavior of metamaterials is necessary to ensure reliable performance in real-world applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162425</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases</title>
<link>https://hdl.handle.net/1721.1/162424</link>
<description>Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases
Grewal, Darshdeep
The urgent global transition to renewable energy is constrained by the intermittent nature of solar and wind sources, highlighting the critical need for scalable energy storage solutions. This thesis presents a comprehensive investigation into the development of structurally integrated supercapacitors based on carbon-doped cement composites, known as EC3 cells. These multifunctional materials combine structural performance with electrochemical energy storage capabilities, enabling integration directly into civil infrastructure. The research focuses on three essential challenges for real-world deployment: (1) replacing laboratory acrylic casings with hydrophobic sealants compatible with cementitious systems, (2) quantifying and mitigating shrinkage and swelling in nanocarbon cement matrices under electrolyte exposure, and (3) identifying corrosion-resistant current collectors that maintain conductivity and mechanical durability under harsh conditions. Bitumen-based coatings were found to be promising sealants for moisture containment. Shrinkage studies are ongoing. Meanwhile, corrosion testing of various collector materials revealed that graphene sheets and stainless steel–reinforced graphitic papers offered optimal trade-offs between conductivity, corrosion resistance, and mechanical performance. The thesis concludes with two field-implementation design proposals—a vertical column and a vaulted arch—both of which leverage compression to improve electrochemical contact and stability. Altogether, this work establishes a foundational framework for embedding energy storage directly into the built environment.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162424</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Feedback Controller Design For Low-Power Autonomous&#13;
Blimp Robot</title>
<link>https://hdl.handle.net/1721.1/162423</link>
<description>Feedback Controller Design For Low-Power Autonomous&#13;
Blimp Robot
Ilerbaig-Bajona, Pau J.
Low-power autonomous robots are able to do things that humans cannot do alone, ranging from small robots that traverse through the human body or other tight spaces to surveillance and monitoring robots that perform extended missions. Navigation is a key power draw for these robots, and thus, their motion planning must be designed to minimize power usage. In order to test motion planning algorithms, a low-power autonomous blimp robot that uses buoyancy to reduce power requirements for three-dimensional movement was designed. The blimp has two forward-facing rotors and two vertically-facing rotors. Due to the underactuated nature of the robot, the blimp cannot be translated sideways, thus leading to a coupling of the blimp’s two horizontal degrees of freedom. To control this, we present a controller for the blimp robot with three separate PID control loops: one for altitude control, one for angle control, and one for proximity approach. Additional fuzzy logic is implemented to improve performance and limit inefficiencies in the dynamic system and controller, such as turning towards the goal first before approaching forward. Combining the PID control loops and fuzzy logic allows for movement from a start point to a goal point, remaining within a 0.3 m radius of the goal point once it is reached. Further work that can be done to improve the physical system and controller is discussed, such as balancing the blimp gondola and rotors, as well as implementing different, physics-based controllers.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162423</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Preserving Human Autonomy in AI-Mediated Negotiations</title>
<link>https://hdl.handle.net/1721.1/162422</link>
<description>Preserving Human Autonomy in AI-Mediated Negotiations
Chen, J. Alvin
The rapid integration of generative artificial intelligence (AI) into negotiation and conflict resolution processes raises critical ethical concerns about the erosion of human autonomy, particularly when AI systems navigate irreconcilable “sacred” values (non-negotiable moral principles) alongside transactional “mundane” interests. This thesis investigates whether generative AI can be designed to recognize and respect important values and beliefs while preserving human agency in decision-making. Drawing on datasets from a repository of large language model (LLM) prompts tested in simulated negotiation scenarios, this study employs a mixed-methods approach to evaluating AI’s efficacy in balancing efficiency with ethical imperatives in negotiation. Quantitative metrics (enumerating the outcomes of two-party negotiations) are analyzed alongside qualitative assessments of values such as transparency and consent, drawn from Kantian ethical frameworks.&#13;
&#13;
My analysis reveals that while AI negotiating bots excel in trades across mundane, tradable interests, they struggle to navigate beliefs and values without oversimplifying moral reasoning or obscuring cultural considerations. These findings inform policy recommendations, including a call for human-in-the-loop validation and technical safeguards for protecting important values in efforts to incorporate AI assistance into negotiations. By bridging technical analysis and ethical theory, I hope this research contributes to improvements in designing autonomy-preserving AI systems for use in a range of negotiating settings, prioritizing human dignity alongside computational efficiency.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162422</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Junctions and Strands: Breaking Property Tradeoffs in Polymer Networks and Composite Polymer Electrolytes</title>
<link>https://hdl.handle.net/1721.1/162421</link>
<description>Junctions and Strands: Breaking Property Tradeoffs in Polymer Networks and Composite Polymer Electrolytes
Herzog-Arbeitman, Abraham
This dissertation first examines the mechanics of polymer networks, specifically material toughness and the nature of material fracture. Polymer networks, which include tire rubber, wind turbine blades, tissue engineering scaffolds, polymer electrolytes (vide infra) and many other materials, possess a useful lifetime typically limited by a fracture event. Thus, methods of controlling toughness (the resistance of a material to tearing) without compromising composition or other properties would dramatically affect waste generation and energy use in the myriad applications in which polymer networks are employed. Toughness in rubbery polymer networks derives from the length and density of the polymer strands; thus, it is generally inversely related to stiffness, as captured in the classic Lake-Thomas theory. This inverse relationship has been perturbed through incorporation of force-responsive molecules (mechanophores) that may either toughen or weaken the material depending on network construction and topology. The first part of this thesis identifies a new class of mechanophores called tetrafunctional cyclobutanes (TCBs), which can be used to either toughen or weaken a network of a single topology without substantial change of network composition, even in dilute gels which are difficult to toughen by other methods. TCBs are then used to identify the mechanisms of mechanophore toughening or weakening in other networks, through a proposed topological metric called network strand continuity (NSC). We show that TCB substituents control the regio- and chemo-selectivity of the cyclobutane core under stress, and this molecular-level selectivity is responsible for network toughening or weakening on the macroscale. These effects can be predicted based on knowledge of activation energetics of the junction guided by NSC.
Subsequently, effects of other network structure parameters on the magnitude of toughening or weakening are considered and the molecular design of second-generation highly active TCBs is described. The second part of this dissertation concerns the design of microporous polymer electrolytes and the applications of their composites and gels in batteries. Polymer electrolytes are a highly anticipated alternative to the liquid electrolytes currently in use, which are toxic, flammable, and incompatible with next-generation battery chemistries. Previous polymer electrolytes exhibit inadequate conductivity and a severe tradeoff between conductivity and mechanical properties. These challenges are accentuated in single-ion conductors, which are theorized to have the strongest rate capability. A new class of single-ion conducting polymer electrolytes that mimics the conduction mechanism of ceramic electrolytes to achieve strong mechanical properties, high conductivity, processability, stability, and recyclability is described. These polymers constitute the first regular microporous polyanions, and the most dissociative microporous polyanions to date. These polymers, alongside other rigid (but not microporous) polysulfonimides, enable strong conductivity performance when coupled with a suitable dopant (here succinonitrile) in low weight fractions. Flexible, low-molecular-weight polymer analogs, used as controls, show inferior mechanical and conductivity properties. In fact, microporous composites outperform even liquid analogs. These composites show best-in-class combinations of mechanical and conductivity properties, and can even conduct divalent cations like Zn(II), a challenging but energy-dense battery metal. Simulations show that polymer-succinonitrile interactions enable fast conduction at the pore edge which results in synergistic behavior.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162421</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Testing Soccer Cleats Designed to Reduce ACL Tears</title>
<link>https://hdl.handle.net/1721.1/162420</link>
<description>Testing Soccer Cleats Designed to Reduce ACL Tears
Sandell, Remi
One of the most common lower extremity injuries in soccer is the ACL tear, which has a long recovery time of 6-13 months and a high risk of reinjury. The majority of ACL tears are non-contact, and the mechanism behind them has been linked to landing flat-footed. This provides an opportunity for engineers to design cleats that reduce the risk of these injuries. The company HBN Shoes designed a soccer cleat aimed at reducing ACL tears by increasing the flexibility of the cleat around the metatarsophalangeal joint. The idea is that increased flexibility at this joint will decrease flat-footed landing while running, reducing injury risk. This study evaluated the ability of the shoe to decrease flat-footed running and flat-footed landing while passing the ball, using F-Scan pressure sensors and GoPro cameras. For the majority of the participants in the study, the HBN cleats produced a reduced peak heel force during running compared to the control cleats. When passing the ball wearing the HBN cleats, the majority of participants spent a lower percentage of the planting step in a flat-footed position compared to the control cleats. This indicates that the HBN cleats could be effective in reducing flat-footed running and landing in athletes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162420</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Race Car Reverse Gear Design</title>
<link>https://hdl.handle.net/1721.1/162419</link>
<description>Race Car Reverse Gear Design
English, Ashley E.
This thesis presents the design, simulation, and installation of a reverse gear system for the RUSH SR, a lightweight, motorcycle-engine-powered race car that lacks built-in reverse capability. The proposed solution repurposes a high-torque automotive starter motor to drive the car in reverse through engagement with a custom ring gear on the rear differential. Analytical modeling and time-domain simulation were used to evaluate performance under varying loads, including the effect of incline angle on terminal velocity and motor current draw. Simulated results show that the system can reliably move the car in reverse on slopes up to 10° before stalling, with current draw remaining within safe operational limits. The mechanical design includes a new differential carrier, gear coupler, and ring gear, while the electrical system explores both off-the-shelf and custom high-side switching controllers to manage power and solenoid activation. The final hardware was bench tested and installed on a working vehicle. Recommendations for future validation include current-limited incline testing and dynamic vehicle response trials. This modular and cost-effective system demonstrates a practical solution to a common race car limitation while preserving the RUSH SR’s lightweight performance characteristics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162419</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Characterization of a Pressure Controller Test&#13;
Stand for Soft Pneumatic Actuator Testing</title>
<link>https://hdl.handle.net/1721.1/162418</link>
<description>Design and Characterization of a Pressure Controller Test&#13;
Stand for Soft Pneumatic Actuator Testing
Comiskey, Evan L.
Soft pneumatic actuators have been of interest to the soft robotics community in recent years for their potential to expand the roles of robotic systems in human-interacting medical devices and delicate manufacturing processes. Unfortunately, it is difficult to analytically predict the degradation behavior of soft pneumatic actuators over time, and to compare such behavior in a repeatable manner between different actuator designs. To gather such degradation data experimentally, this thesis presents an open-source proportional-integral-derivative (PID) control test stand system built with only off-the-shelf components, in order to create a tool for standardized and repeatable characterization testing of soft pneumatic actuator life-cycle behavior. The system is open-source and modular, allowing researchers in the MIT Fabrication-Integrated Design Lab (FIDL) as well as other soft robotics researchers to customize the program and associated hardware as necessary for their own soft pneumatic actuator research inquiries. This thesis explores the design of both the hardware and software of this test stand system, which is informed by the system’s functional requirements, and additionally explores the PID controller’s capabilities and limitations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162418</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Effective System Architectures for Cislunar&#13;
Space Situational Awareness</title>
<link>https://hdl.handle.net/1721.1/162417</link>
<description>Characterizing Effective System Architectures for Cislunar&#13;
Space Situational Awareness
Rude, Connor D.
Achieving Space Situational Awareness (SSA) in the Cislunar region—the area between the geosynchronous belt and the Moon's gravitational boundary—poses significant technological and organizational challenges. Instead of proposing new theoretical systems, this thesis employs the Architecting Innovative Enterprise Strategy (ARIES) Framework to evaluate existing SSA architectures and previously suggested solutions. ARIES provides a structured assessment through its elements (strategy, information, infrastructure, products, services, processes, organizations, and knowledge), identifying infrastructure, acquisition strategies, policy-driven timelines, and communication structures as key areas for improvement. Stakeholder objectives, current initiatives, and operational needs guide the characterization of an ideal SSA architecture.&#13;
&#13;
Four prior system proposals for cislunar SSA are assessed using qualitative analysis of existing literature and first-order physics-based simulations. These evaluations correlate specific design features with enhanced system suitability. Particularly beneficial are constellation proximity to targets, strategic constellation placement and phasing, sensor orbital diversity, and orbital stability. Additionally, certain design strategies consistently yield higher suitability, including focusing on underserved SSA regions, leveraging heritage technology, and optimizing designs for ride-share launch compatibility.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162417</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Visual and Haptic Feedback Systems on User Performance with a Hand-Held Robot</title>
<link>https://hdl.handle.net/1721.1/162416</link>
<description>Exploring Visual and Haptic Feedback Systems on User Performance with a Hand-Held Robot
Shiferaw, Ruth
While robotic systems allow users to maintain accuracy in high-precision environments, achieving intuitive control is challenging without real-time feedback. Haptic feedback, which communicates otherwise unfelt sensations through vibrations, is widely used in consumer technologies such as video games and smartphones. However, in contexts where knowing the precise force applied by the robot is critical—such as medical procedures or hazardous environments—haptic cues alone may provide insufficient resolution, increasing user workload. Visual feedback, by contrast, is more commonly used and offers greater versatility and precision.&#13;
This study compared the impact of visual feedback (a color-changing LED light strip) and haptic feedback (vibrations in a controller) on user performance in a “fragile object” manipulation task. Nine participants completed the task under four feedback conditions: no feedback, visual feedback, haptic feedback, and combined visual-haptic feedback. Subjective ratings showed that most participants preferred modalities that included visual cues, citing lower perceived workload and clearer force awareness. However, despite some participants reporting minimal benefit from haptics, performance metrics revealed that for others, haptic feedback meaningfully supported task success.&#13;
These findings suggest that while simple visual indicators, such as green-yellow-orange-red light strips, provide accessible and interpretable force feedback, the integration of haptic cues can enhance performance by offering complementary real-time force information. Future designs may benefit from refining both modalities to balance intuitiveness, resolution, and user comfort, especially in applications requiring precise force modulation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162416</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>LLM-Supported Natural Language to Bash Translation</title>
<link>https://hdl.handle.net/1721.1/162415</link>
<description>LLM-Supported Natural Language to Bash Translation
Westenfelder, Finnian Ellis
The Bourne-Again Shell (Bash) command-line interface for Linux systems has complex syntax and requires extensive specialized knowledge. Using the natural language to Bash command (NL2SH) translation capabilities of large language models (LLMs) for command composition alleviates these issues. However, the NL2SH performance of LLMs is difficult to assess due to inaccurate test data and unreliable heuristics for determining the functional equivalence of Bash commands. We present a manually verified test dataset of 600 instruction-command pairs and a training dataset of 40,939 pairs, increasing the size of previous datasets by 441% and 135%, respectively. Further, we present a novel functional equivalence heuristic that combines command execution with LLM evaluation of command outputs. Our heuristic can determine the functional equivalence of two Bash commands with 95% confidence, a 16% increase over previous heuristics. Evaluation of popular LLMs using our test dataset and heuristic demonstrates that parsing, in-context learning, in-weight learning, and constrained decoding can improve NL2SH accuracy by up to 32%. Additionally, we consider military use cases for NL2SH models and discuss the limitations of current Department of Defense documentation standards for LLMs. We write and publish documentation for our models and datasets to promote safe use. Our findings emphasize the importance of dataset quality, execution-based evaluation, translation method, and proper documentation for advancing NL2SH translation and enabling responsible use. Our code is available at https://github.com/westenfelder/NL2SH.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162415</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Abiotic and Biotic Polymer Degradation to Inform Sustainable Design</title>
<link>https://hdl.handle.net/1721.1/162414</link>
<description>Abiotic and Biotic Polymer Degradation to Inform Sustainable Design
Tantawi, Omar
As global plastic production continues to rise, understanding environmental processes governing plastic degradation is crucial to inform the sustainable design of polymers. This thesis is structured into three chapters, each addressing critical aspects of polymer degradation:&#13;
In the first chapter, I develop and apply a sequential abiotic (photodegradation and hydrolysis) and biotic degradation test to a diverse suite of 18 polymers, including novel polyhydroxyalkanoate polyesters, commercially available bio-based polymers (e.g., polylactic acid, poly-3-hydroxybutyrate), and conventional fossil-derived polymers (e.g., polypropylene, polyethylene terephthalate). Results illustrate that current standard biodegradation methods relying only on mineralization underestimate polymer degradation by up to two-fold. Simulated sunlight notably enhanced polymer degradation by mobilizing dissolved organic carbon (DOC), which proved highly biodegradable in the marine environment. Chemical structural differences were clearly linked to degradation behaviors, emphasizing the utility of the developed workflow for rapidly identifying environmentally relevant degradation mechanisms, which can inform structure-property relationships for future polymer designs.&#13;
In the second chapter, I delve deeper into characterizing polymer-derived dissolved degradation products. Conducting Mass Remainder Analysis (MARA) using non-target liquid chromatography–high-resolution mass spectrometry (LC-HRMS) data, we systematically identified oligomeric degradation products and homologous series of polyamide-6 (PA6), polycaprolactone (PCL), and polylactic acid (PLA). Complementary experimental approaches (retention-time shifts across varied mobile phase pH, fragmentation analysis, and spectral matching) were essential to improve structure elucidation and determine acid-base properties (pKa) and hydrophobicity (logKow and logD). The experimental findings emphasized large deviations of oligomer hydrophobicity from computational predictions, underscoring the necessity for oligomer-specific experimental data to enhance environmental fate modeling and risk assessment accuracy.&#13;
In the third chapter, I investigate the fate of polymer-derived dissolved organic carbon (p-DOC) from PLA, PCL and PA6, focusing specifically on oligomer chemistry. Using natural marine microbial communities, PLA- and PCL-derived DOC demonstrated rapid biodegradation (82-85% within six days), while PA6-derived DOC exhibited resistance. Detailed analysis using high-resolution mass spectrometry and MARA revealed significant chemical structure dependence in biodegradation rates, with rapid degradation of aliphatic ester-containing cyclic and linear oligomers. Larger cyclic oligomers degraded faster, while short linear oligomers showed transient accumulation followed by degradation. PA6 oligomers exhibited limited biodegradability, with cyclic oligomers showing minimal degradation. The results emphasize the critical influence of oligomer chemistry and microbial enzymatic specificity, providing essential insights for designing sustainable polymers compatible with marine environments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162414</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulation-Based Reinforcement Learning Policy Optimization for&#13;
Tactile Manipulation: A Case Study on the Eyesight Hand</title>
<link>https://hdl.handle.net/1721.1/162413</link>
<description>Simulation-Based Reinforcement Learning Policy Optimization for&#13;
Tactile Manipulation: A Case Study on the Eyesight Hand
Chang, Ethan
Robotic manipulation remains a complex and unsolved challenge due to the need for adaptability across diverse objects and tasks. In this work, we explore how to train effective manipulation policies using reinforcement learning in simulation for the Eyesight Hand: a novel, low-cost, tactile-enabled robotic hand. We implement a range of experiments in MuJoCo to evaluate the impact of controller types, observation spaces, reward formulations, and curriculum strategies on policy performance. Our findings highlight the benefits of delta position control, a carefully selected observation space including joint states, control vectors, object pose, and contact forces, and success-driven curriculum learning. Our study establishes baseline strategies for training robust, tactile-based policies on this in-house hardware.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162413</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designer-User Validation of a Method for Predicting Cognitive Interventions for Sustainable Product Design</title>
<link>https://hdl.handle.net/1721.1/162412</link>
<description>Designer-User Validation of a Method for Predicting Cognitive Interventions for Sustainable Product Design
Ladolcetta, Mia
Life cycle assessments that measure the environmental impact of products are often limited by a lack of information about the use phase of a product. To address this gap by encouraging sustainable user behavior from the early brainstorming and prototype phases of product development, principles of environmental psychology and sustainable product design can be linked through determinants of pro-environmental behavior and cognitive interventions included as product features. A study will be completed to validate these relationships by surveying participants on their determinants, grouping participants into personas that share similar determinant ratings, then pitching sketch prototypes developed with corresponding cognitive interventions and measuring participants’ receptiveness to each sketch prototype. To aid in the development of this sketch prototype evaluation survey, a series of user interviews was conducted with the sample use case of factors that encourage the use of reusable water bottles, to better understand how users may evaluate the products depicted in the sketch prototypes. Validating the links between determinants and cognitive interventions can assist designers in determining a hierarchy among the determinants they design for and in more efficiently selecting effective cognitive interventions for users with different experiences.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162412</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hyperspectral Remote Sensing for UXO Detection and Damage Assessment on Airfield Pavements</title>
<link>https://hdl.handle.net/1721.1/162411</link>
<description>Hyperspectral Remote Sensing for UXO Detection and Damage Assessment on Airfield Pavements
Pietersen, Randall
If an airfield being operated by the U.S. Air Force is attacked, the current method for assessing its condition is a slow visual and manual inspection process, exposing personnel to dangerous conditions and delaying repair operations. Developing a fully autonomous remote assessment solution would improve the speed and safety of this critical task, but remains an unsolved problem despite continued advances in drone technology, deep learning, and computer vision. This research explores using near-surface hyperspectral sensors as an alternative to red, green, blue (RGB) digital cameras, in hopes of improving detection precision and accuracy for airfield assessment. However, even with modern hyperspectral sensors the benefit of increasing spectral image resolution comes at a cost, creating additional complexity, uncertainty, and sensitivity in the acquisition, data correction, and downstream detection processes. &#13;
&#13;
This work presents a series of tests, each designed to better understand and refine a full hyperspectral image detection sequence, starting with sensor selection and raw data acquisition, proceeding to radiometric correction, and culminating in image recognition by means of supervised deep learning (DL). Regarding sensor selection and data acquisition, these findings indicate that for many applications of computer vision, using a hyperspectral camera with high spectral resolution is unnecessary. It is more beneficial to select a camera with snapshot imaging that instead maximizes spectral range or spatial resolution. Radiometric correction is then explored, and experiments demonstrate that correction makes machine learning classification models less sensitive to changes in scene illumination, thus improving overall image recognition performance. Finally, deep learning models for image recognition are tested and a new method for generating synthetic hyperspectral data is developed and shown to be useful for estimating hyperspectral model performance on larger datasets, when real data are limited. Overall, the findings presented in this thesis suggest that by refining the methods used for data acquisition, correction, and detection, hyperspectral imaging improves image recognition when compared to traditional RGB cameras. This applies not only for airfield damage assessment but extends to other real-world applications requiring computer vision and scene understanding.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162411</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Measurement Approaches to the Study of Secondary Organic Aerosol</title>
<link>https://hdl.handle.net/1721.1/162410</link>
<description>New Measurement Approaches to the Study of Secondary Organic Aerosol
Helstrom, Erik
Aerosol particles constitute a class of atmospheric pollutants that are detrimental to human health and influence the Earth’s climate. A significant fraction of aerosol mass is composed of organic material, produced by photochemical reactions of organic trace gases which form secondary organic aerosol (SOA). However, the large diversity of volatile organic compounds (VOCs) makes it challenging to identify all of the chemical reactions contributing to SOA formation. In addition to this chemical complexity, our ability to identify and measure all of the relevant organic compounds, especially species present in aerosol particles, is limited by challenges in efficiently sampling and detecting the various classes of molecules formed. Improving knowledge of the chemical behavior of aerosol will improve our ability to predict how changing emissions and chemical conditions will impact the formation and properties of particulate matter in the future.&#13;
This thesis will explore recent improvements in instrumentation and measurement techniques and apply them to laboratory studies of organic carbon and SOA. First, we adapt a technique for measuring total suspended carbon to laboratory chamber experiments, converting organic compounds with high temperature catalysis to carbon dioxide, which is then monitored in real time. This allows for a “top-down” constraint on the overall concentration of all organic species (including SOA) as experiments proceed, as some lower volatility products are lost to the surfaces of the laboratory chamber. Second, we compare the measurements of SOA from three chemical ionization mass spectrometers using different ionization and desorption methods to detect particle-phase species. Clear differences emerge in the detected formulas across instruments, highlighting variations in chemical sensitivities to different classes of compounds and the influence of fragmentation on the detected products. Finally, we explore how changing peroxy radical fate influences SOA formation by monitoring SOA composition with extractive electrospray ionization (EESI) mass spectrometry. Differences in particle-phase products, particularly nitrates, hydroperoxides, and dimers, make the dependence of initial SOA composition on peroxy radical pathways clear. Over time, we observe a convergence of SOA spectra formed under different peroxy radical regimes, suggesting the influence of secondary products and particle-phase chemistry, though some differences persist from the initial gas-phase peroxy radical fate. Overall, this thesis demonstrates improved tools for constraining and investigating VOC oxidation pathways leading to particle-phase organic species.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162410</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of Recycling Stream Contaminants on Extrusion and Mechanical Properties of rPET</title>
<link>https://hdl.handle.net/1721.1/162409</link>
<description>Effect of Recycling Stream Contaminants on Extrusion and Mechanical Properties of rPET
Escandon, Mercedes
The emerging field of large-scale additive manufacturing has a wide range of applications including the potential to use recycled plastics. A type of 3D printing that can use recycled plastics is Fused Granulate Fabrication, which takes in polymer pellets and melts them down in a screw and barrel before extruding them layer by layer. The process of turning plastic trash into print-ready pellets typically involves sorting, shredding, washing, drying, and re-extruding the material into pellets, each step requiring time, equipment, and energy. To reduce both the cost and carbon footprint of 3D printing with recycled plastic, some of these steps could potentially be eliminated. Washing and drying plastic is a very energy-intensive step that could potentially be skipped or significantly modified. The possibility of using unwashed plastics in 3D printing was explored, focusing on residual beverage contamination.&#13;
&#13;
Controlled amounts of soda were introduced to clean, virgin PET pellets prior to drying. The contamination levels ranged from 0.25% to 3% soda by mass. The plastic was melted and extruded into thin strands using a stationary horizontal extruder and a conveyor belt. The mass flow rate was determined from the mass of strands to quantify the quality of extrusion. No significant effect of contamination level on mass flow rate was measured. The strength of the parts was determined using a tensile test. The Young’s Modulus initially increases with the contamination level, peaking at 1% contamination with a Young’s Modulus of 2.80 GPa, which was 30% higher than the measured value for the clean PET. Above 1% contamination, there was a significant drop-off in strength. These results demonstrate that there is an acceptable level of beverage contamination when recycling plastic.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162409</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measuring Density of Novel Technorganic Material</title>
<link>https://hdl.handle.net/1721.1/162408</link>
<description>Measuring Density of Novel Technorganic Material
Ramos-Munoz, Jorge Felix
Self-healing technorganic materials are critical for organ printing/repair operations. This study characterized one such technorganic material by measuring its density. By ascertaining the density of the material, we hope to later derive other important properties including the speed of sound within the material, its stiffness, its conductivity, and its natural frequency. The material in question was printed on a gold substrate by depositing hexane within a silver nitrate solution. The density was obtained by observing the volume change of the solution in which it was printed and by measuring the sample’s mass. A final density of 0.622 ± 0.227 g/cm³ was measured, and we hope to continue characterizing the material’s mechanical, thermal, and electrical properties with future studies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162408</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Testing of a Hadal Sediment Sampling System</title>
<link>https://hdl.handle.net/1721.1/162407</link>
<description>Design and Testing of a Hadal Sediment Sampling System
Do, Thao X.
Sediment cores are conventionally gathered to collect data on seabed chemical and mineral composition. These samples are important in determining the health of the surrounding environment and, if sediment layers are preserved, assessing the environmental history and trajectory of our ocean floors. Over the last few decades, some attempts to collect sediment core samples have been made from as deep as the Mariana Trench, which contains the deepest known point on Earth’s surface at 11,000 meters [1] [2]. However, the hadal region remains one of the most underexplored areas of our oceans [3].&#13;
Working in conjunction with Inkfish, a submersible technology company, we developed a deep-sea sediment core sampler that will travel to the Mariana Trench aboard one of Inkfish’s submersible landers and will collect four inches of sediment in ambient pressures of 110.32 MPa [4].&#13;
The first phase of this project was to design and fabricate a prototype sediment core sampling device. We designed an entirely mechanical sampler because underwater actuators suitable for use in hadal conditions are difficult to source and require additional communication and power resources. Furthermore, our device aims to preserve the layers of the sediment core samples as the lander ascends to preserve relative time scales within the sample.&#13;
The next phase involved testing the functionality of different subsystems of our device. In this paper, we considered two different sediment collection apparatuses and one-way valves for our collection tube. We performed field testing at the Charles River in Cambridge, MA to assess which apparatus and valve combination would provide the best results based on the volume of sediment collected and retained.&#13;
Building on our mechanical proof-of-concept, we improved the design and fabricated a second iteration prototype for 2-3 km depths. In this version, we addressed the two main weaknesses of the initial proof of concept design: substantial friction during sliding and the complexity of assembly and maintenance due to the numerous parts.&#13;
Lastly, we engineered a deployment system that made our device compatible with Inkfish’s lander deployment procedure. The deployment system was designed to lower the sediment sampler device from within the bay to below the lander once the lander was submerged in water.&#13;
In early October, the improved sediment sampler and the deployment system were tested during an engineering expedition near the shores of Tonga. Ultimately, we believe that our sediment sampler presents a viable purely mechanical solution to collecting deep-sea sediment from profoundly unexplored areas at hadal depths like the Mariana Trench. Our sampler can easily be mounted onto any surface where it would touch the ocean floor, requiring no electronics or controls. Though we were constrained by the particular seafloor lander used by Inkfish, the size of the sampler is scalable, allowing both the sample diameter and depth to be adjusted for a given mission. By making these sediment samples more accessible, we believe we will have an impact across a number of marine research areas.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162407</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Redefining “Space Sustainability” for Launch Vehicles: Forecasting the Atmospheric Impact of the Commercial Space Launch Industry in 2050</title>
<link>https://hdl.handle.net/1721.1/162406</link>
<description>Redefining “Space Sustainability” for Launch Vehicles: Forecasting the Atmospheric Impact of the Commercial Space Launch Industry in 2050
Ma, Clara Z.
Discussions on “space sustainability” have largely centered on orbital debris, the burnup of vehicles during atmospheric reentry, and the resulting emissions. However, few studies have examined emissions from the launches themselves. Along with reentry burnup, rocket launches are the only source of high altitude anthropogenic emissions. At such high altitudes, emitted particles can remain in circulation for years. With the annual growth rate of the commercial launch industry averaging 14.6% in the last 4 years and over 211 launches in 2023 alone, our research on the atmospheric impact of launch vehicles comes at a crucial point in the policy debate on space sustainability.&#13;
&#13;
This thesis outlines several potential future scenarios of the launch industry in 2050, with all the vehicles in each scenario using the same fuel type. We examine these four launch scenarios—a kerosene (RP-1) launch industry, a methane (CH4) launch industry, a hydrogen (H2) launch industry, and a control or “baseline” scenario without launches. For each scenario, we estimate the number of launches for a distribution of heavy-lift launch vehicles across origin spaceports. We simulate the chemical interactions of the launch plumes with the atmosphere using the global atmospheric chemistry model GEOS-Chem High Performance (GCHP). Finally, we quantify the steady state impact of launch emissions on stratospheric ozone and surface air quality.&#13;
&#13;
We find that the black carbon emitted by kerosene and methane rockets causes an indirect increase in stratospheric ozone due to the removal of NOx, with ozone column change averaging 5.07 Dobson Units (DU) and 1.26 DU respectively; hydrogen rockets cause a net decrease in ozone column averaging -0.11 DU. The population-weighted average surface ozone impact is -0.286 ppb, -0.068 ppb, and 0.023 ppb for RP-1 rockets, CH4 rockets, and H2 rockets respectively. The population-weighted average surface PM2.5 impact is -0.031 μg/m³, -0.004 μg/m³, and 0.002 μg/m³ for RP-1, CH4, and H2 rockets respectively. Although RP-1 and CH4 rockets decrease surface ozone and surface PM2.5, H2 rockets have the smallest magnitude impacts on the atmosphere overall. Our findings have important implications for commercial launch providers, research institutions, and policymakers including the Federal Aviation Administration (FAA) and NASA.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162406</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Principles for Scalable Connectomics</title>
<link>https://hdl.handle.net/1721.1/162405</link>
<description>Engineering Principles for Scalable Connectomics
Garzon Navarro, Monserrate
Brain tissue sectioning presents a significant challenge in connectomics, particularly when scaling to larger volumes. In the MICrONS 1 mm³ mouse visual cortex dataset, 25.1% of scanned images—representing over a month of imaging work—were discarded due to sectioning defects. Current methods result in material loss during cutting and face limitations in tool wear and process efficiency. This thesis examines tissue sectioning through an engineering lens. Drawing from established machining practices and parallel industries, we propose and evaluate potential improvements to sectioning methods. The work aims to contribute to ongoing efforts in mapping larger connectomes, making the process more practical and less error-prone.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162405</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>AI Trust and Technology Optimism in the Workforce: Data-Driven Insights into Regional Variation</title>
<link>https://hdl.handle.net/1721.1/162404</link>
<description>AI Trust and Technology Optimism in the Workforce: Data-Driven Insights into Regional Variation
Velonia Bellonia, Maria Eleni
Automation and AI systems are reshaping the workplace. How these technologies make a difference varies according to local contexts. Workers’ willingness to trust and embrace these technologies is shaping how this transformation unfolds in practice. Some workers trust AI more than others, and interestingly, trust levels differ from one region to another. Drawing on a far-reaching 2024 worker survey spanning different countries, and on a rich body of literature on technology, trust, and change, this work examines how key factors influencing workers’ AI trust and technology optimism interweave, shaping their perspectives on new technologies and automation. The focus is on understanding how the industrial and regulatory landscape in which workers operate, combined with their personal experiences with AI, shapes their AI optimism, with a particular emphasis on the US and Europe. While external market innovation indicators provide limited understanding of workers’ technology optimism, individual interaction and familiarity with AI, alongside organizational AI adoption and a worker’s industry of employment, emerge as key factors shaping AI trust. Additionally, the regulatory environment, encompassing technology governance, social safety nets, and workers’ institutional trust, all seem connected with how workers think about the impact of new technologies on society, the economy, and their jobs. Interpersonal trust propensity contributes to AI trust formation, though its relevance exhibits regional variation. By offering insights into the critical factors shaping the relationship between workers and AI, this study aims to provide evidence that supports societies in unlocking the value of emerging technologies, while empowering the workforce to confidently embrace and excel alongside them.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162404</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ideology and the First Gothics</title>
<link>https://hdl.handle.net/1721.1/162403</link>
<description>Ideology and the First Gothics
Grove, Allen Whitlock
Thesis: B.S., Massachusetts Institute of Technology, Literature
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162403</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational methods for dissecting multicellular mechanisms of complex diseases</title>
<link>https://hdl.handle.net/1721.1/162335</link>
<description>Computational methods for dissecting multicellular mechanisms of complex diseases
Mitchel, Jonathan
Single-cell genomics technologies have enabled unbiased characterization of cell types and cellular states. However, the high-dimensional nature of this data necessitates computational and statistical methods to uncover the biological processes that shape it. In my thesis research, I developed three computational methods to explore genetic regulatory mechanisms underlying common diseases and the resulting multicellular patterns of dysfunction. In the first project, I developed a method called scITD to investigate how cellular processes across distinct cell types coordinate in disease contexts. scITD identifies sets of genes in one or more cell types that co-vary together across biological samples. Through the application of this tool to various immune-cell datasets, we uncovered highly reproducible gene expression patterns associated with autoimmune patient phenotypes. In the second project, I characterized technical artifacts prevalent in imaging-based spatial transcriptomics data. These artifacts arise from the misassignment of transcript molecules to incorrect cells. I further demonstrated how these artifacts confound downstream analyses, including differential expression and cell-cell interaction inference. To address this, I jointly developed a correction method that mitigates these artifacts, thereby uncovering novel biological insights in cancer datasets. In the third project, I introduced a computational method to unravel the mechanisms of genetic variants identified from genome-wide association study loci. This method tests whether these same genetic variants also underlie changes to gene expression in specific cell types or states. Applying this tool to autoimmune and neurodegenerative datasets uncovered new SNP-gene-phenotype links and localized their effects to specific cell populations, helping to refine our understanding of these pathologies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162335</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Packaging and Integration Solutions for Next-Generation Photonic Systems</title>
<link>https://hdl.handle.net/1721.1/162334</link>
<description>Scalable Packaging and Integration Solutions for Next-Generation Photonic Systems
Ranno, Luigi
The ever-increasing demand for faster and more efficient computation has propelled the rapid growth of integrated photonics, with commercial products starting to reach global markets in recent years. Nevertheless, integrated photonics still lacks the scale required to meet market demands and falls short of the performance targets necessary for many critical applications. Innovative solutions are imperative if photonics is to drive technological advancement and become a ubiquitous part of next-generation systems, rather than being confined to niche or high-end applications.&#13;
Among the key bottlenecks is photonics packaging, which refers to the challenge of electrically, optically, and thermally interfacing with a photonic integrated circuit (PIC). Current packaging solutions often impose significant design tradeoffs, contributing to industry fragmentation and high costs. Two-photon lithography (TPL), a high-resolution 3D manufacturing technique, has emerged as a promising enabler of robust and efficient optical interconnects. However, existing research has focused heavily on performance, often relying on additional chip processing steps (e.g., cladding removal) that hinder scalability. Moreover, prior work largely restricts itself to parameterized geometries, such as quadratic curves or spherical sections, that underutilize the true design freedom of TPL. My work addresses both of these limitations. I developed a freeform, facet-attached micro-reflector solution that is fully compatible with standard foundry processes, adaptable to challenging coupling scenarios, and computationally efficient to design. This coupling solution demonstrates all the properties desired in an ideal optical interface: low insertion loss (~0.6 dB), wide bandwidth (&gt;300 nm), foundry compatibility, and geometric universality across PIC platforms.&#13;
Another major challenge facing the photonics industry is the lack of critical functionalities within current foundry processes due to limited material availability. Significant gains in performance and capability can be realized by integrating new materials on-chip, but doing so while maintaining CMOS-foundry compatibility remains a formidable task. To address this, I helped develop a novel photonics platform enabling substrate-inverted multi-material integration. This platform supports seamless integration of diverse materials while leveraging existing PIC process stacks, including metallization layers, unlocking new classes of high-performance devices. Building on this idea, I further demonstrated how material integration can directly enable new applications. Specifically, I developed a selective and ultra-sensitive environmental lead (Pb²⁺) sensor, based on a crown ether functionalization layer. This device showcases the potential of hybrid material platforms to deliver practical, field-relevant solutions in environmental monitoring and beyond.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162334</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic and Post-Synthetic Methods towards Fine Tuning&#13;
the Chemical and Physical Properties of Metal-Organic Frameworks</title>
<link>https://hdl.handle.net/1721.1/162333</link>
<description>Synthetic and Post-Synthetic Methods towards Fine Tuning&#13;
the Chemical and Physical Properties of Metal-Organic Frameworks
Iliescu, Andrei
This thesis explores synthetic and post-synthetic strategies for tailoring the chemical and physical properties of metal-organic frameworks (MOFs), with a particular emphasis on modulating redox activity, framework composition, and ionic conductivity. The first part of the work focuses on leveraging MOF-embedded polynuclear metal clusters for multi-electron redox chemistry. A square-planar tetramanganese cluster was shown to reversibly interconvert between molecular oxygen and metal-oxo species via a four-electron pathway. This reactivity was then investigated by varying the identity and redox potential of the metal centers within the tetrametal cluster. The Fe(II) and Co(II) analogs reveal distinct metal-specific behavior and provide insight into the tunability of redox-active SBUs within MOFs. Next, post-synthetic cation exchange was employed to access a previously unreported Zn-based MOF, ZnZnBTT, which exhibits significant Zn-ion conductivity due to mobile charge-balancing cations. This material demonstrates the potential of MOFs in next-generation solid-state battery technologies. Finally, the impact of linker electron donicity on cluster structure and reactivity was explored using a new mixed-azolate ligand. Four isostructural MOFs incorporating Co, Ni, Cu, and Cd were synthesized, revealing that the electron-rich pyrazolate groups modulate cluster composition and redox behavior. Notably, CoBTDP exhibits O₂ reactivity, unlike its all-tetrazolate counterpart, underscoring the role of linker design in tuning MOF function. Together, these studies demonstrate how careful control over MOF synthesis and post-synthetic modification can be used to fine-tune redox behavior, framework composition, and ion transport, providing new avenues for the design of functional porous materials.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162333</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Metal Complexes for Optical Read-out of Magnetic Fields</title>
<link>https://hdl.handle.net/1721.1/162332</link>
<description>Leveraging Metal Complexes for Optical Read-out of Magnetic Fields
Yi, Seungyeon
Optical detection of magnetic phenomena offers a compelling pathway toward the development of highly sensitive and versatile molecular sensors. This thesis investigates the design of metal complexes tailored for magnetic field read-out through light–matter interactions, focusing on two strategies. The first section explores magnetochiral dichroism (MChD), an optical effect that emerges from the interplay between molecular chirality and magnetism. By systematically varying the metal centers within a series of chiral lanthanide complexes—specifically, Tb³⁺ and Dy³⁺—we examine how differences in magnetic moment modulate the MChD response. This comparative study reveals fundamental chemical design principles for enhancing MChD intensity and deepens our understanding of how structural and electronic factors jointly shape this directional optical effect. The second section addresses the challenge of engineering optically addressable molecular qubits based on Ni²⁺ complexes. Realizing effective spin-state read-out in these systems requires precise control over both magnetic and photophysical properties. To this end, we investigate ligand modification strategies aimed at enhancing luminescence while preserving an S = 1 ground state suitable for quantum applications. Collectively, we hope these studies contribute to a better understanding of the design space for spin–photon coupled molecular systems, offering new tools for magnetooptical sensing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162332</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying Plasticity and Temperature in High Velocity Microparticle Impacts</title>
<link>https://hdl.handle.net/1721.1/162331</link>
<description>Quantifying Plasticity and Temperature in High Velocity Microparticle Impacts
Lucas, Tyler J.
The Laser Induced Particle Impact Test, or LIPIT, is a benchtop experimental setup that enables in-situ observation of micron-scale particles impacting targets at velocities ~10-1500 m/s. Through a combination of the high velocity and small length-scale of the impact, strain rates exceeding 10⁷ /s can be achieved while maintaining a subsonic plastic wave, preventing formation of strong shockwaves and hydrodynamic behavior. The LIPIT has been effectively applied to study phenomena in mechanical behavior, cold spray, and astronomical impacts, all of which will be further investigated in this work. This thesis combines LIPIT experiments and finite element modeling to explore the dynamic behavior of pure metals in the unique regime of strain, strain rate, and pressure achieved in high velocity impact conditions. First, the effect of material microstructure on the mechanical behavior of copper at high strain rates is explored to improve the capability of constitutive strength models in accurately representing experiments. Next, a method is introduced to measure the dynamic yield strength of ductile microparticles, effectively removing the need for the tacit assumption that the properties of bulk materials can be imposed on powders despite differences in processing. The understanding of microstructure and particle behavior are then combined to study the influence of material microstructure on the solid-state bonding of copper particles to copper substrates of different temper. Finally, this work applies the new understanding of plasticity and dynamic modeling in high strain rate conditions to quantitatively study the behavior of metals with a phase transition in absence of strong shockwaves.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162331</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Endothelial cell plasticity as a marker of vascular disease and&#13;
predictor of adverse outcomes to stress</title>
<link>https://hdl.handle.net/1721.1/162330</link>
<description>Endothelial cell plasticity as a marker of vascular disease and&#13;
predictor of adverse outcomes to stress
Salazar Martín, Antonio Gabino
Endothelial cells (ECs) dynamically sense and adapt to their local biomechanical and biochemical environments, a process crucial for maintaining vascular homeostasis. Loss of this plasticity is implicated in vascular diseases, where endothelial dysfunction and maladaptive responses exacerbate disease progression and limit therapeutic efficacy. We investigated the role of endothelial cell plasticity under pathophysiological conditions and its impact on therapeutic interventions – mechanical, pharmacologic and genetic. Specifically, Aim 1 characterizes the modulation of EC plasticity by shear stress, revealing that flow patterns drive distinct transcriptional signatures and subpopulation behaviors, as demonstrated through single-cell transcriptomics in human aortic endothelial cells. Aim 2 examines the interplay between EC dynamism and antiproliferative drugs, in particular rapamycin and paclitaxel – the agents released from drug-eluting stents – showing that biomechanical cues from flow dominate EC responses, potentially limiting drug efficacy in regions of disturbed flow. Aim 3 extends the investigation by moving beyond the overwhelming of cells with pharmacologic dosing into the domain of controlled genetic modification, which is in concert with the direction of modern therapeutics and also provides a further dimension to the perspective of endothelial biology. We sought to discern if genetically modified cells maintain their characteristic "endothelial" profile or if the interplay among genomic alterations, transcriptional and proteomic shifts, and environmental cues leads to a state that challenges the hypothesis that ECs remain plastic until they become committed to flow. The integration of single-cell transcriptomics and in vivo models provides novel insights into the heterogeneity of endothelial responses and underscores the importance of considering biomechanical and biological factors in developing targeted therapeutic strategies for vascular diseases.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162330</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Ridership and Travel Time Impacts of Bus Service Changes Using Sketch Planning Methods</title>
<link>https://hdl.handle.net/1721.1/162329</link>
<description>Predicting Ridership and Travel Time Impacts of Bus Service Changes Using Sketch Planning Methods
Lim, Tiffany M.
Bus service changes range in scale, and understanding their impacts on ridership and travel times can inform decision-making as changes are considered for the bus network. Budgetary limitations are at the heart of service change decisions, resulting in the need for analysts to assess different scenarios and accommodate quick turnarounds. This thesis provides a sketch planning framework for predicting ridership and travel time impacts of bus service changes, with a focus on direct demand models and the use of an open-source multimodal routing algorithm. The framework is designed to be streamlined with the use of data sources and capabilities, such as exporting a General Transit Feed Specification (GTFS) feed of a given bus network scenario, that agencies may have access to through existing transit planning tools.&#13;
&#13;
Direct demand models are developed to estimate bus ridership at the level of approximately one-mile route-segments and time-of-day periods. This level of analysis provides a more disaggregated evaluation of bus ridership than past direct demand models. The models are sensitive to both route and network improvements. New variables designed to capture the relationship between bus routes, including the competitive and complementary nature of routes, are introduced and incorporated in the model development process. These models are developed for the Washington Metropolitan Area Transit Authority (WMATA). A case study analyzing two scenarios in WMATA's Better Bus Network Redesign (BBNR) is presented, with selected route examples to illustrate how the models capture different types of service changes. These routes fall under three categories: routes with no major service changes, routes with improvements in frequency, and routes with re-routing and other improvements.&#13;
&#13;
An open-source multimodal routing algorithm, available through an R package called r5r, is used for travel time analysis. r5r calculates a distribution of door-to-door travel times for a given origin-destination (OD) matrix and returns a selected percentile value from the distribution for each OD pair. The percentile parameter is calibrated through a comparison of estimated travel times and actual travel times recorded in origin-destination-interchange inference (ODX) data. Low percentile values were found to provide travel times close to actual travel times. Additional guidance is provided for interpreting travel times from r5r, and use cases related to calculating travel time impacts between scenarios and evaluating rail competitiveness for a given bus network are explored.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162329</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensing and Predicting Urban Rail Platform Crowding Using Emerging Data Sources</title>
<link>https://hdl.handle.net/1721.1/162328</link>
<description>Sensing and Predicting Urban Rail Platform Crowding Using Emerging Data Sources
Fiorista, Riccardo
Rail platform crowding poses serious challenges to passenger safety, operational performance, and service quality in urban rail transit systems. This thesis investigates the short-term forecasting of platform-level crowding, focusing on enhancing prediction accuracy, spatial granularity, and operational interpretability through multi-source data integration. We first employ a gradient-boosted tree regression model (LightGBM) that leverages fare card transaction, vehicle location, weather, and public event data from the Washington Metropolitan Area Transit Authority (WMATA) to forecast platform-level occupancies 15–60 minutes ahead. Our results show significant improvements over a WMATA-internal baseline while providing a robust data preparation and prediction pipeline. Subsequently, we explore integrating platform-level CCTV data to overcome the lack of real-time crowding estimates. Using a custom-collected image dataset and three classes of computer vision methods, namely object detection and head counting (YOLOv11, RT-DETRv2, APGCC), crowd-level classification (Crowd-ViT), and semantic image segmentation (DeepLabV3), we demonstrate that estimated counts from calibrated image segmentation maps enable accurate real-time estimation of platform crowding. Additionally, we show that these estimates can correct and improve 15-minute-horizon predictions when incorporated with a stochastic gradient-boosted tree learner such as LightGBMLSS. Finally, we extend the time series modeling framework by incorporating network-wide causal influences through an analysis driven by Empirical Dynamic Modeling and Convergent Cross Mapping. We show that accounting for network effects improves predictive performance, particularly for platforms characterized by regular low-occupancy patterns, enhancing the prediction of anomalies.
The work presented in this thesis extends the existing literature on short-term platform crowding prediction, offering new methodologies to incorporate emerging CCTV data and causal network effects for increased prediction accuracy.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162328</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial Thinking as an Analytical Lens for Bilateral International Development: Lessons from the Harbor Reconstruction Project in Jamestown, Accra</title>
<link>https://hdl.handle.net/1721.1/162327</link>
<description>Spatial Thinking as an Analytical Lens for Bilateral International Development: Lessons from the Harbor Reconstruction Project in Jamestown, Accra
Avis, Victoria
This thesis examines transcalar tensions that emerge from urban infrastructure development projects funded through bilateral foreign assistance mechanisms. Using a mixed-methods case study approach to gather data from a wide variety of historical and contemporary primary and secondary sources, this research centers a harbor revitalization and port reconstruction project in Jamestown, a historic fishing community in Accra, Ghana. Having coordinated plans with the Ghanaian national government, a Chinese state-owned construction firm began working on the port in 2020. In 2024, the revitalized harbor and expanded port were officially handed over to the government of Ghana in a widely attended ceremony. The spatial implications of this physical urban infrastructure project across international, national, municipal, and local levels are complex and interrelated. Therefore, this case study is especially relevant at a historical moment when the nature of bilateral engagement may be undergoing significant transformation. &#13;
&#13;
This thesis argues that spatial thinking, a foundational concept in urban planning, is a necessary analytical lens to incorporate within international development practice. Despite its relevance, spatial thinking has not been meaningfully incorporated into international development policy or implementation. Therefore, this thesis seeks to bridge epistemic gaps between urban planning and international development by advancing a spatial thinking framework, adapted for use in international development contexts. In doing so, this thesis envisions a future for bilateral development assistance that delivers equitable and sustainable development outcomes across scales of engagement. This approach, rooted in spatial thinking, intends to respond to local community needs and aspirations, capacitate municipal governments, align with national priorities, and accommodate geopolitical dynamics that facilitate bilateral project implementation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162327</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Order Under Pressure: Structural and Magnetic Characterization at Extreme Stresses</title>
<link>https://hdl.handle.net/1721.1/162326</link>
<description>Order Under Pressure: Structural and Magnetic Characterization at Extreme Stresses
Riesel, Eric Alan
Mechanical stress is an exquisitely versatile tool for controlling chemical bonding. This multi-dimensional synthetic lever tunes the electronic structure of elements and changes the way that atoms arrange and coordinate with one another. These unique electronic configurations and coordination environments have profound impacts on the properties of materials, giving rise to functionality ranging from high-temperature superconductivity to diverse magnetism. Despite over a century of research on solid-state materials above one gigapascal (GPa), experimental and theoretical obstacles remain for structural and physical characterization of complex phases that persist only under these conditions. We begin to address the wide-reaching challenge of structural characterization in complex, bulky sample environments by employing recent advancements in generative artificial intelligence to develop a generalized approach to solving the structure of crystalline solid-state materials. We demonstrate that our model achieves a 42% match rate on a curated set of experimental powder diffraction patterns, and we then use our model to solve several previously unsolved structures at high pressure. We proceed to focus on a different structural characterization problem: defects which arise exclusively under mechanical stress. We demonstrate that site-disorder is unlikely to occur at room temperature and high pressure in InBi and instead propose a set of defects which explain the X-ray spectra and scattering patterns equally well. Progressing to properties characterization and magnetic ordering at high pressure, we experimentally demonstrate that MnBi2, a compound which does not persist at ambient pressure, is a permanent magnet. Comparing the orbital and spin contributions to the total moment across compounds in the Mn–Bi system, we build up design principles for permanent magnets using heavy main-group elements.
Together, our work in structural and physical characterization at extreme stresses charts a path towards the discovery of functional high-pressure bulk materials and defects.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162326</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coherent Dynamic Nuclear Polarization: Mechanistic Insights and Experimental Advances</title>
<link>https://hdl.handle.net/1721.1/162325</link>
<description>Coherent Dynamic Nuclear Polarization: Mechanistic Insights and Experimental Advances
Ouyang, Yifu
Dynamic Nuclear Polarization (DNP) enhances the sensitivity of solid-state Nuclear Magnetic Resonance (NMR) by transferring polarization from electrons to nuclei. While traditional continuous-wave (CW) DNP has advanced through improved radical design, the development of pulsed DNP—employing short, high-power microwave bursts—has shown the advantages of coherent spin control. We present both theoretical and experimental investigations aimed at understanding and optimizing polarization transfer. On the theoretical side, we examined multiple DNP mechanisms, including a re-evaluation of the Overhauser effect in insulating solids and a foundational treatment of the chirped solid effect. We also identified a new transfer channel, termed Resonant Mixing, arising from interference effects under off-resonance driving. Building on these insights, we developed a general framework for analyzing amplitude-, phase-, and frequency-modulated pulses. This approach enables the design of hybrid pulse sequences that combine modulation and chirping to produce efficient, selective spin transfer. These sequences maintain high enhancement even at reduced microwave power, thereby improving scalability to high magnetic fields. To test the practical viability of this approach, we designed and evaluated a prototype 400 MHz/263 GHz probe incorporating new resonator and RF technologies. While the initial performance was limited, the system provided a testbed for future high-field pulsed DNP experiments under realistic conditions. Together, these results establish a theoretical and technical foundation for next-generation pulsed DNP, emphasizing coherent spin manipulation, power-efficient design, and applicability to high-field, static-solid NMR systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162325</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Quantitative Solid-State NMR Methods&#13;
to Characterize Membrane Proteins</title>
<link>https://hdl.handle.net/1721.1/162324</link>
<description>Development of Quantitative Solid-State NMR Methods&#13;
to Characterize Membrane Proteins
Somberg, Noah H.
Membrane proteins are critical components of all cells and viruses. While about one quarter of human proteins are membrane proteins, they constitute half of all drug targets. Despite their importance, membrane proteins are underrepresented among known protein structures, constituting only about 2 percent of the Protein Data Bank. This discrepancy is due to the unique difficulties in studying membrane proteins, which make many techniques commonly used in structural biology extremely challenging. Membrane proteins often display a structural dependence on the local environment. It is therefore essential to have structural biology tools to study these critical proteins in native-like environments. Solid-state Nuclear Magnetic Resonance (NMR) spectroscopy provides one of the few methods available to study the structure and dynamics of membrane proteins directly in the lipid bilayer. Herein, practical and theoretical considerations of dipolar and chemical shift anisotropy recoupling experiments are presented. These experiments were applied to the study of membrane proteins. New experiments and novel analysis techniques were developed, and the results guided biophysical understanding and drug development. &#13;
&#13;
Among the membrane proteins of SARS-CoV-2, the Envelope (E) protein is the least understood. E forms a membrane-bound ion channel and is associated with inducing the respiratory symptoms of the disease. The exact oligomeric state of E was not known. The fluorine centerband-only detection of exchange (CODEX) experiment was employed to directly measure the oligomeric state of E in lipid bilayers. The transmembrane domain of E forms a pentamer, while a construct including the ectodomain forms a dimer. Under certain conditions, the pentamers cluster together, forming supramolecular assemblies that may have a unique role in the virus life cycle. &#13;
&#13;
New sensitivity-enhanced carbon-fluorine rotational-echo double-resonance (REDOR) experiments are developed and used to investigate the drug binding of E. The small molecule drug hexamethylene amiloride binds to E at the protein-lipid interface. This informed the development of higher affinity inhibitors, which were also shown to bind E at the lipid interface. A novel strategy to identify ligand binding sites of proteins without sequential resonance assignment is presented. The technique uses a computationally efficient second moment approximation to calculate REDOR dephasing, and simulated annealing to explore the associated parameter space.&#13;
&#13;
The new methods and advances in quantification and simulation of the REDOR and CODEX experiments enhance the available solid-state NMR toolkit for the study of critical membrane proteins.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162324</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving the reliability of optical phase change materials-based devices</title>
<link>https://hdl.handle.net/1721.1/162323</link>
<description>Improving the reliability of optical phase change materials-based devices
Popescu, Cosmin-Constantin
Optical components are part of our daily lives, including vision and camera systems, data transmission in telecommunication, sensing applications, manufacturing and medicine, and more. Compact on-chip integrated optics such as photonic integrated circuits and optical metasurfaces can provide us with the desired functionality, but there is a continuous need for active non-volatile tuning capabilities of these devices. &#13;
Chalcogenide optical phase change materials (PCMs) (e.g., Ge₂Sb₂Te₅) have garnered sustained interest in the photonics community in the past several years precisely due to their potential for non-volatile control of optical signals. Prior work showcased the integration of PCMs via free-space metal heaters for metasurfaces, demonstrating switching for several tens of cycles. To understand the mechanisms preventing extended cycling of such devices, we have developed a near-IR-transparent platform on doped silicon-on-insulator for testing both material behavior and device performance, along with the auxiliary code and designs needed for such testing. Using this platform, a Ge₂Sb₂Se₄Te-based transmissive metasurface filter was demonstrated with a cycling performance of 1250 cycles. Subsequently, the mechanisms limiting the performance of such devices were explored, providing guidelines to improve their reliability and endurance both at the phase change material level and at the accompanying device level. Furthermore, we showcase potential future devices that can be leveraged for PCM photonics, including a theoretical design that avoids free-carrier absorption losses from the doped silicon heater by placing the dopants at the node of a resonant mode, limiting their overlap with regions of high field amplitude, and a matrix array of heaters for higher device functionality. Finally, we point to areas of focus for scaling these concepts to commercial applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162323</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision Medicine in Diabetes Using Continuous Glucose Monitoring</title>
<link>https://hdl.handle.net/1721.1/162322</link>
<description>Precision Medicine in Diabetes Using Continuous Glucose Monitoring
Healey, Elizabeth
Diabetes affects millions of individuals around the world and is a leading cause of death. The risk of serious long-term complications in diabetes can be mitigated through early interventions in the form of medication and behavioral changes. However, the pathophysiology of diabetes and the course of the disease are markedly heterogeneous, making it essential that disease management is tailored to the individual. Continuous glucose monitoring (CGM) helps patients manage their disease through the collection of real-time measurements of interstitial glucose, providing insight into glycemic dynamics that laboratory measurements cannot capture. In this thesis, we investigate how CGM can be used to enable personalized disease management in diabetes using modern methods from machine learning and signal processing. We first investigate a model-based approach to estimate metabolic parameters from CGM data. We show that parameters estimated from daily CGM data correlate with parameters derived from in-clinic laboratory measurements. Then, we explore how the rapidly emerging field of generative artificial intelligence can be integrated into diabetes care through analysis of CGM data. We show how large language model agents hold promising potential to assist patients and clinicians in managing diabetes through the synthesis and narrative summarization of large amounts of CGM data. Finally, we leverage observational CGM data to understand heterogeneity in type 2 diabetes. The work in this thesis shows how modern computational methods in machine learning can enable precision medicine in diabetes by leveraging wearable CGM data for improved disease management.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162322</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Insights into Mycobacteriales Galactan Biosynthesis</title>
<link>https://hdl.handle.net/1721.1/162321</link>
<description>Structural Insights into Mycobacteriales Galactan Biosynthesis
Carter, Alan Wylde
The order Mycobacteriales includes a number of severe human pathogens, including Mycobacterium tuberculosis, the causative agent of tuberculosis and a leading cause of infectious disease-related mortality worldwide. The unique cell wall structure of these bacteria is essential for their viability and has been studied as a potential target for the development of novel therapeutics. A key component of the mycobacterial cell wall is the galactan, a 30-40 residue linear polysaccharide of galactofuranose (Galf) with an alternating β(1,5) and β(1,6) linkage pattern, synthesized by the polymerase Galactofuranosyl Transferase 2 (GlfT2). While GlfT2 has been established as a processive polymerase with intrinsic sequence control, the mechanism underlying this activity remains unclear. In the studies presented here, we provide structural insights into Nocardia brasiliensis GlfT2 (NbrGlfT2) using X-ray crystallography and cryo-electron microscopy. We characterize both the acceptor-bound and membrane-embedded structures of NbrGlfT2 and propose three models for its catalysis: Processive Galactan Sliding, Feedback-Regulated Sequence Control, and Membrane Curvature-Mediated Polymerization. Furthermore, we structurally characterize a previously undescribed GlfT2 paralog from Rhodococcus equi, which we term ReqGlfT3. We confirm its galactofuranosyl transferase activity and identify the production of β(1,3) and β(1,5) linkages. These findings offer new insights into GlfT2 and related polymerizing glycosyltransferases, informing our understanding of enzymatic regioselectivity mechanisms and polysaccharide biosynthesis across the bacterial kingdom.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162321</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making Energy Work: Enacting Renewable Transitions in the Deserts of Chile and California</title>
<link>https://hdl.handle.net/1721.1/162320</link>
<description>Making Energy Work: Enacting Renewable Transitions in the Deserts of Chile and California
White-Nockleby, Caroline Celeste
This dissertation explores how different people enact and engage the “Energy Transition,” a temporal orientation increasingly used to describe a range of activities that operate on and through electricity, renewable energy, and fossil fuels. I ask, given the infrastructural, political, economic, and material-semiotic continuities and imbrications between renewable energies and fossil fuels, how do actors craft, stabilize, and mobilize the [just] renewable energy transition? How, in other words, do people distinguish the activities and projects of the energy transition from continuity and more ordinary change? I also ask, what kind of political work is “transition” – along with its usual modifiers, energy, renewability, and justice – doing in the world? Building on scholarship that explores the history, genealogy, materiality, and political economy of energies and resources, I investigate these questions by analyzing energy transition projects across two region-scale field sites: Antofagasta, Chile and Imperial County, California. &#13;
&#13;
I find that self-consciously small-scale technologies like maps, models, and pilot projects are vital to assembling just, renewable resources – and to demarcating particular places and projects as in transition. Though these technologies often aspire to make and move green energy 24/7 and worldwide, they face substantial obstacles to doing so. Their value is, thus, often drawn more from the future they index than their present functionality. I term these temporal indexical technologies “anticipatory devices,” and show that such devices gain significance in relation to the particular forms of expertise that actors draw on to design them. Each chapter, therefore, analyzes a different disciplinary form of expertise in which the concepts of renewability, justice, transition, and energy, aided by various anticipatory devices, take shape: cartography (Chapters 1 and 2), chemistry (Chapter 3), engineering (Chapter 4), and economics (Chapter 5).&#13;
&#13;
Ultimately, I find that energy transition is often treated as a universalizing, singular narrative, which can shape and limit the scope of climate mitigation projects. Profit motives often incentivize corporate actors to design projects to align with the more temporary kinds of transitions that have long been constitutive of capitalism, even though it is longer-term changes that will most effectively mitigate climate change. The same is true for renewability, which can easily be articulated as an ideal that supports visions for unfettered capitalist growth. Moreover, approaches that treat “transition” as universal can also easily echo and reinvigorate an evolutionist approach to time, in which places and countries compare their relative advancement towards carbon neutrality along a single, teleological temporal axis. Yet I also encountered many actors engaged in more situated projects that attended to local histories of land use, industry, and power. These projects pluralized transition – or did not use the term at all – offering situated and distributive visions of energetic change that might enable more regenerative futures to germinate.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162320</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solid-State NMR Characterization of PET Ligand Binding Sites in AD Tau Fibrils</title>
<link>https://hdl.handle.net/1721.1/162319</link>
<description>Solid-State NMR Characterization of PET Ligand Binding Sites in AD Tau Fibrils
Angehrn Rodas, Frida Nicole
Aggregation of the tau protein into fibrils is a key feature of Alzheimer's disease (AD) and many other neurodegenerative disorders. Developing small molecules that bind these tau fibrils is important for the diagnosis and treatment of tauopathies. This thesis centers on a study of the binding sites of a positron emission tomography (PET) ligand, PI-2620, on a recombinant tau construct that adopts the C-shaped AD fold. Solid-state NMR experiments, combined with other techniques such as transmission electron microscopy (TEM) and docking simulations, enabled a better understanding of the binding sites of this PET agent. Specifically, 13C-19F REDOR experiments were used to identify residues near the ligand. PI-2620 was found to bind two primary sites within the C-shaped structure. The docking simulations suggested several possible binding poses. Additional 2D NMR experiments suggest that PI-2620 alters the protofilament interfaces. The stoichiometry of PI-2620 binding to tau fibrils was determined to be approximately 20 mol%, with varying degrees of ligand mobility. These findings offer insights into the interaction of this PET tracer with tau fibrils and have implications for the design of improved imaging agents.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162319</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Demand‑Driven Decarbonization: Impact of Voluntary 24/7 Low-Carbon Power Procurement</title>
<link>https://hdl.handle.net/1721.1/162318</link>
<description>Demand‑Driven Decarbonization: Impact of Voluntary 24/7 Low-Carbon Power Procurement
Ali, Adam
This thesis examines the impact of voluntary 24/7 (hourly) low-carbon power procurement on grid-wide emissions and investment strategies in generation technologies. Recognizing the growing number of businesses and government agencies making voluntary commitments to reduce greenhouse gas emissions (GHGs) through increased procurement of low-carbon power, this study investigates the effectiveness of these commitments, particularly those aiming for hourly matching of low-carbon energy with consumption. &#13;
&#13;
This study employs GenX, an open-source capacity expansion model, to simulate an electricity market with two classes of buyers. Buyers in one class commit to reduce the carbon intensity of their electricity procurement by some amount, while buyers in the other class procure electricity at minimum cost without any regard to carbon emissions. This setup allows for a detailed examination of how different levels of ambition in voluntary hourly low-carbon commitments influence the electricity system and investment strategies. The study tests both a simpler model without storage and demand-response capabilities and a more complex model that incorporates these elements to assess their impact on meeting hourly clean energy targets.&#13;
&#13;
Our findings suggest that at low to moderate ambition levels of hourly low-carbon electricity procurement, the buyers with voluntary commitments can primarily "reshuffle" already-built low-carbon generation without incentivizing new clean capacity additions or achieving measurable reductions in system-wide emissions. Significant shifts in generation investments and decreases in total carbon emissions are observed only when commitments exceed a critical threshold, ranging from approximately 70% to 96% depending on system characteristics reflected in the different model setups. Even then, cost-minimizing behavior in voluntary procurement can distort investment, spurring excessive wind and solar builds that exceed what a least‑cost, socially optimal zero‑carbon portfolio would require.&#13;
&#13;
In conclusion, for voluntary 24/7 procurement to cut emissions materially—and avoid misallocating capital—either ambition must be extremely high or participation must broaden enough to share costs and benefits. Otherwise, committed buyers bear steep costs, non‑participants enjoy spill‑over gains, and the system drifts toward a sub‑optimal technology mix.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162318</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Clarifying Decision Making Processes: Tools for Interdependency Modeling</title>
<link>https://hdl.handle.net/1721.1/162317</link>
<description>Clarifying Decision Making Processes: Tools for Interdependency Modeling
Baker, Ellie F.
Tools for problem specification in AI decision making are underdeveloped at present. I propose two new tools for this purpose: first, a model of AI decision making, which supports problem identification and mitigation; second, a Bill of Assumptions for Data Production. Data is an important component of AI decision-making systems, and data is necessarily produced by making a series of assumptions. My Bill of Assumptions for Data Production is a new approach to communicating these assumptions that facilitates collaboration, data transparency, and the reduction of harmful bias. I illustrate this new approach by developing a dataset that estimates the distribution of government education spending in the US across income deciles. My dataset informs existing Distributional National Accounts (DINA), which are a primary measure of income inequality in the US (Piketty et al., 2018). My estimate shows that government education spending is more progressive than assumed in current DINA. Furthermore, I show that removing federal education funding to postsecondary institutions would produce substantial harm.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162317</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Containerless measurement and thermodynamic prediction of the physical properties of liquid steels</title>
<link>https://hdl.handle.net/1721.1/162316</link>
<description>Containerless measurement and thermodynamic prediction of the physical properties of liquid steels
Benderly-Kremen, Ethan
Incomplete control over macrosegregation during steel solidification hinders the development of novel steel alloys and applications by limiting the compositions that can be successfully cast. Analysis of macrosegregation at the solidification front is aided by study of the liquid state via fluid mechanics, which can place bounds on when macrosegregation can occur.&#13;
This non-dimensional analysis requires knowledge of the physical properties of the liquid steel (density, surface tension, and viscosity) and how they change with composition as solutes are rejected from the solid at the solidification front. Macrosegregation is most pronounced in ferrous liquids containing light, non-metallic species, i.e., carbon, oxygen, and sulfur. Yet existing literature models for predicting the physical properties of liquid alloys are incapable of accounting for these interstitial species inside an iron lattice. Additionally, direct experimental measurement of these properties is hindered by the requisite high temperatures, the high reactivity of the melt, and the vast composition space of steel alloys.&#13;
&#13;
Herein, both the experimental and modeling challenges are introduced and addressed. An experimental technique using a floating zone furnace, pendant drop geometry, a high-speed camera, and video segmentation was developed for simultaneous, containerless, high-throughput measurement of the physical properties of liquid steel samples. The central atom model, a multicomponent solution model, is extended to investigate the statistical structure of alloys consistent with their energetics and solution thermodynamics. This allows liquid structure determination from thermochemical measurements, bypassing the structural and atomistic modeling challenges of high-temperature liquid systems.&#13;
&#13;
These methods and models are explored on the binary systems of iron-nickel, the major substitutional alloying element in steel, and iron-carbon, the major interstitial species. Results demonstrate successful liquid property measurement at experimental rates far exceeding traditional high-temperature research and introduce a basis for a unified treatment of thermodynamic and physical properties in multicomponent alloy melts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162316</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transforming geospatial textual data into narrative storytelling visualization</title>
<link>https://hdl.handle.net/1721.1/162315</link>
<description>Transforming geospatial textual data into narrative storytelling visualization
Ma, Ruixian
Current large language models (LLMs) often struggle to integrate geospatial data into dynamic, interactive visualizations, relying instead on text-based outputs. This limitation hinders the full potential of geospatial data to convey complex information through narrative-driven communication, making it difficult for users to interpret the data easily. Meanwhile, existing data visualization tools typically depend on static dashboards and rigid scientific formats, which have a steep learning curve and lack engagement through narrative elements. Audiences, however, are increasingly drawn to story-driven presentations, as seen in platforms pioneered by the MIT Senseable City Lab, and widely popularized by The New York Times and the Washington Post, which use narrative data visualization formats to attract and immerse readers. This gap between the capabilities of current LLM-based tools and users’ preferences presents a unique opportunity to develop a narrative-based geospatial visualization tool that meets these needs. This tool could transform how we communicate spatial data, particularly in fields such as journalism, travel planning, and urban planning, where the ability to convey complex patterns in an engaging manner is essential.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162315</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying the Post-Pandemic Urban Activity &amp; Mobility Regime: Implications for Adaptation and Future Planning of Cities and Public Transit Systems</title>
<link>https://hdl.handle.net/1721.1/162314</link>
<description>Quantifying the Post-Pandemic Urban Activity &amp; Mobility Regime: Implications for Adaptation and Future Planning of Cities and Public Transit Systems
Leong, Chee Weng Michael
Between 2019 and 2022, a pattern break during the COVID-19 pandemic introduced consequential changes to the trajectory of urban activity and mobility patterns. This thesis advances both theoretical and practical understandings of this evolving post-pandemic regime of activity and mobility, as well as its implications for the future of cities and public transit systems, using high-resolution location-based services data and a case study within the Washington, DC metropolitan area. First, a custom analysis framework is developed in which geographical units (subcenters and neighborhoods) are designed to provide insight at an interpretable scale that corresponds to policy and business decision making. Second, a custom suite of twelve mobility metrics is curated to distill the applicability of post-pandemic changes in travel patterns to business problems (site selection, network planning, and operations planning) and societal outcomes (social fabric, quality of life, and environmental sustainability). To complement spatial analysis, these metrics are also regressed on socio-economic attributes to provide greater explanatory power. Lastly, key trends in post-pandemic activity and mobility are distilled into eight mega-trends, and their implications for the adaptation of public transportation systems and future urban development are discussed, including complexity from divergent definitions of success among different stakeholders.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162314</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Small Stores, Big Obstacles: Understanding Constraints and Opportunities for Micro-retail Firms</title>
<link>https://hdl.handle.net/1721.1/162313</link>
<description>Small Stores, Big Obstacles: Understanding Constraints and Opportunities for Micro-retail Firms
Cervantes Gil, Sergio Yael
Micro and small enterprises (MSEs), particularly informal micro-retailers known as nanostores, play a vital role in developing economies but remain largely underserved by traditional financial institutions and overlooked in economic policy. In Mexico, nanostores account for more than 95% of businesses and over 10% of national employment, yet face high closure rates, low productivity, and limited access to formal credit. This thesis asks: What structural and contextual factors determine the survival and performance of nanostores, and how can policy better support high-potential firms within this segment? To answer this, the study constructs a longitudinal panel of nanostores using microdata from the Mexican Economic Census (2009, 2014, and 2019), and combines it with municipality-level contextual data including crime, infrastructure, unemployment, electricity costs, and business regulations. It applies survival models to estimate firm closure dynamics and implements a misallocation framework to quantify distortions in capital and labor usage. The results reveal that misallocation—particularly of capital—is pervasive and systematically linked to institutional weaknesses and credit access constraints. In response to the limited real-time data available for this sector, the thesis proposes the LIFT Performance Index, developed by the MIT Low-Income Firms Transformation Lab (MIT LIFT Lab), as a diffusion-based tool for monitoring micro-retailers’ business sentiments using structured operational surveys. A pilot implementation in Argentina demonstrates the index’s potential to generate timely and actionable insights for policymakers and private stakeholders. Overall, this work contributes a novel empirical foundation for understanding heterogeneity within the micro-retail sector and offers a scalable framework for designing targeted, data-driven interventions to support inclusive economic development.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162313</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Probes and Strategies to Study Mycobacterial Cell Envelope Assembly</title>
<link>https://hdl.handle.net/1721.1/162312</link>
<description>Chemical Probes and Strategies to Study Mycobacterial Cell Envelope Assembly
Lee, So Young
The cell envelope of Mycobacterium tuberculosis (Mtb) is central to its pathogenicity, immune evasion, and intrinsic drug resistance. While the importance of its glycan components is well recognized, their structural intricacies have hindered efforts to directly perturb and investigate their function. In this work, we discuss chemical approaches to study and manipulate mycobacterial cell envelope biosynthesis. Specifically, we present biosynthetic glycan labeling strategies that leverage the activity of native glycosyltransferases to probe arabinogalactan and mannose-containing glycolipids. Building upon prior work using lipid-linked probes to label mycobacterial arabinan, Chapter 2 details the development of azido-functionalized farnesyl phosphoryl mannose (AzFPM) probes that mimic native polyprenyl-phosphoryl donors and selectively label mannose-containing glycolipids in live mycobacteria. Chapter 3 showcases how these probes enable glycan substructure-specific labeling and biochemical enrichment of glycolipids across Corynebacterium glutamicum, Mycobacterium smegmatis, and Mtb. This strategy provides a platform to study glycolipid dynamics in wild-type cells, a task previously hindered by the lack of selective labeling tools. In Chapter 4, we further interrogate endogenous glycan biosynthesis by applying biosynthetic labeling probes in C. glutamicum. Perturbation of arabinan structure by probe incorporation led to impaired cell wall integrity and growth defects. In glycosyltransferase deletion strains, altered probe incorporation patterns revealed enzyme-specific roles in glycan assembly and architecture. Beyond novel labeling strategies, Chapter 5 describes the development of targeted inhibitors of galactan biosynthesis, an essential yet underexplored component of the mycobacterial cell wall. We employed a prodrug strategy to inhibit UDP-galactopyranose mutase (UGM), which catalyzes the committed step in Galf production. 
To overcome delivery challenges, we designed amide prodrugs activated intracellularly by amidases. One prodrug exhibited improved efficacy in Mtb, providing a promising lead for antibiotic development. Collectively, these studies establish biosynthetic labeling and targeted galactan inhibition as powerful tools for dissecting the structure and function of the mycobacterial cell envelope, offering new avenues for developing chemical probes and therapeutics against tuberculosis.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162312</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photonic design for chemical analysis</title>
<link>https://hdl.handle.net/1721.1/162311</link>
<description>Photonic design for chemical analysis
Ma, Wenchao
With light being manipulated at subwavelength scales, photonic design has been explored for various applications. In this dissertation, we investigate the potential application of photonic design to chemical analysis, with a focus on spectrometry and chemical sensing.&#13;
First, we demonstrate inverse design of single-layer metasurfaces with shape optimization. Each of the designed metasurfaces simultaneously focuses light and shapes the spectra of focused light without using any filters. Thus, both spatial and spectral properties of the meta-optics are engineered. We chose the color matching functions of the CIE 1931 XYZ color space as the target spectral shapes and a distant region with a finite size as the focal area.&#13;
We then present an inverse-design approach for computational spectrometers, in which the scattering media are topology-optimized to achieve higher robustness in inference, without the need of a training set of spectra and noise. Our approach also allows the selection of the inference algorithm to be decoupled from that of the scatterer. For smooth spectra, we devise a regularized reconstruction algorithm based on Chebyshev interpolation, which yields higher accuracy compared with conventional treatment in which the spectra are sampled at equally spaced frequencies or wavelengths with equal weights. Our approaches are numerically demonstrated via inverse design of integrated computational spectrometers and reconstruction of example spectra. The inverse-designed spectrometer exhibits significantly better performance in the presence of noise than its counterparts with random scatterers.&#13;
Furthermore, we discuss chemical detection using optical resonances, which can increase the sensitivity of measurements to material perturbations and also accelerate photochemical reactions. We show that these two effects can be combined multiplicatively, to enhance the detection via weak/low-concentration photochemical reactions far beyond what could previously be attained.  For an optical resonance with a quality factor Q, the sensitivity of our detection scheme is enhanced by ~ Q² (where ~ denotes approximate proportionality), as demonstrated by both theoretical arguments and numerical simulations of a simple optical grating resonance coupled with reaction-diffusion equations.  Such an approach opens a door to further improvements by careful design of the resonance: even a 3-parameter optimization of the grating resonance yields an additional ≈ 7 × improvement.&#13;
Finally, regarding linear electromagnetic systems possessing time-reversal symmetry, we present an approach to bound ratios of internal fields excited from different ports, using only the scattering matrix, improving upon previous related bounds by Sounas and Alù (2017). By reciprocity, emitted-wave amplitudes from internal dipole sources are bounded in a similar way. When applied to coupled-resonant systems, our method constrains ratios of resonant coupling/decay coefficients. We also obtain a relation for the relative phase of fields excited from the two ports and the ratio of field intensities in a two-port system. In addition, although lossy systems do not have time-reversal symmetry, we can still approximately bound loss-induced non-unitarity of the S matrix using only the lossless S matrix. We show numerical validations of the near-tightness of our bounds in various scattering systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162311</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sulfidation of Ternary Oxides: A Thermodynamic and Experimental Study Toward Selective Metal Extraction</title>
<link>https://hdl.handle.net/1721.1/162310</link>
<description>Sulfidation of Ternary Oxides: A Thermodynamic and Experimental Study Toward Selective Metal Extraction
Boury, Charles A.
Conventional metal refining techniques face growing challenges due to increasing ore complexity and their limited ability to accommodate post-consumer recycling feedstocks. Sulfidation is a promising pyrometallurgical approach for the selective separation and recovery of critical elements from such complex feedstocks. This thesis presents a chemical thermodynamic framework for the sulfidation of divalent alkaline earth and transition metal ternary oxides of titanates, molybdates, tungstates, niobates, and tantalates. Modified predominance diagrams were constructed to determine the possible sulfidation outcomes, and a sensitivity analysis on the input thermodynamic data was performed to assess their impact on the outcome of sulfidation. A high-temperature apparatus was designed and tested to compare predicted and observed sulfidation behavior. Together, the model and experimental apparatus provide a new experimental method to estimate the high-temperature Gibbs energy of multicomponent oxides. Application to current chemical metallurgy challenges, including the recycling of tantalum-based capacitors and the refining of tungsten-bearing ores, underscores sulfidation as a powerful step to support new processing routes for sustainable metal recovery.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162310</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maximizing Flexibility and Efficacy of Undersea Wireless Power Transfer Systems</title>
<link>https://hdl.handle.net/1721.1/162309</link>
<description>Maximizing Flexibility and Efficacy of Undersea Wireless Power Transfer Systems
Gess, Derek
Autonomous underwater vehicles (AUVs) are increasingly essential tools for scientific, economic, and military ocean-based applications. To advance the capabilities of AUVs, it is crucial to extend the mission duration and range of these vehicles. One proposed way to achieve this is with undersea wireless power transfer (WPT) systems that allow AUV charging from remote areas of the ocean floor. While there has been significant research in WPT system design, these projects often tailor the design specifications towards a specific AUV shape, size, or power requirement. These point designs have wildly different power outputs, efficiencies, coupling coefficients, sizes, and more, making it difficult to understand how the design parameters affect each of these properties. This paper aims to address this knowledge gap in current undersea WPT systems by designing an equivalent circuit framework for a WPT system with a targeted power output of ~1 kW to show how design parameters such as input voltage, coil size, transfer gap, coupling coefficient, and load resistance affect the power output and efficiency of the charger. Furthermore, the effects of misalignment in vertical and lateral directions for two separate compensation networks, series-series (SS) and series-parallel (SP), are compared to determine which compensation network would perform best under specified circumstances. The paper then addresses the losses associated with a conductive environment by coupling the circuit model with an electric field model in seawater. The impact of undersea losses on system metrics is quantified, showing a 3% decrease in efficiency compared to operation in air. Finally, the study investigates the use of magnetic cores in WPT systems for EM shielding and field-shaping characteristics. A design methodology is introduced to rank material properties based on the desired system performance characteristics. 
Suggested materials are then chosen according to this ranking and tested using the models derived in the study. By mapping both electrical and magnetic-core design spaces in a conductive seawater environment, this thesis delivers a unified methodology for designing scalable, efficient undersea wireless chargers.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162309</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamentals, Voltage Control and Novel Application of Exchange Bias in Magnetic Thin Films</title>
<link>https://hdl.handle.net/1721.1/162308</link>
<description>Fundamentals, Voltage Control and Novel Application of Exchange Bias in Magnetic Thin Films
Hasan, Muhammad Usama
The past half-century has seen remarkable advances in microelectronics, but as transistors approach their physical limits, there is a growing need for beyond-CMOS technologies. Spintronics, which aims to utilize the electron's spin in magnetic thin films for data storage and manipulation, is a promising alternative. Among the rich physical interactions that appear in magnetic thin films, the exchange-bias (EB) effect is essential for many spintronic devices. EB is an effect that arises at ferromagnet/antiferromagnet interfaces, which imposes an internal field on the ferromagnet, enhancing the range of functionalities that can be derived from devices. This thesis explores EB in Co/Co₁₋ₓNiₓO systems at multiple levels, from fundamental understanding to its manipulation and applications. First, we introduce a new model to predict EB in polycrystalline antiferromagnetic thin films and validate it with experimental data. Second, we tackle another fundamental aspect – disentangling the effects of EB on nucleation and propagation of magnetization reversal. We discover that nucleation and propagation EB can be unequal and demonstrate how that can lead to unexpected behavior of the system, including having asymmetric hysteresis loops. Building on these insights, we demonstrate voltage-controlled ionic gating to manipulate EB, achieving cyclic toggling of the EB sign in a ferrimagnetic system, where the magnetization direction is fully determined by the gating state. Furthermore, by targeting the antiferromagnet directly, we discover EB enhancement up to 100%, which can be explained with the help of the model developed earlier. We demonstrate sub-millisecond and analog operation in this system. Finally, a new approach to improving bit-stability whilst preserving performance in magnetic racetrack memory is proposed which involves incorporating an EB layer with the right properties into the track. 
The benefits obtained from this strategy can help push this next-gen memory device closer to commercialization. We believe the findings in this thesis substantially extend the state-of-the-art in terms of basic understanding of EB, ways of EB manipulation and unexplored use-cases of EB, paving the way for new functionalities in spintronic devices applicable for non-conventional computing paradigms or next-gen memory devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162308</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Urban Resilience to Environmental and Health Risks</title>
<link>https://hdl.handle.net/1721.1/162307</link>
<description>Essays on Urban Resilience to Environmental and Health Risks
Fan, Yichun
Cities today face growing environmental and health threats due to climate change. Building urban resilience requires understanding the complex interplay between environmental and social systems, accounting for adaptation dynamics. Using new data, computational tools, and economic analysis, this thesis explores how people and places adapt to environmental risks and the implications for urban policy and infrastructure planning.&#13;
&#13;
Chapter 1 examines how the financing structure of climate resilience infrastructure impacts long-term economic dynamics. Using satellite imagery to develop new performance metrics for U.S. flood protection levees, I find that decentralized financing of infrastructure maintenance creates a feedback loop: lower housing values and property tax revenues reduce fiscal capacity for levee maintenance, which increases levee failure risk and further depresses housing values. These feedback dynamics reinforce under-maintenance and perpetuate spatial inequality. &#13;
&#13;
Chapter 2 analyzes the social cost of behavioral adaptation. Leveraging 27 million fitness app exercise records and quasi-experimental designs, I find that heavy air pollution reduces outdoor exercise likelihood by 28%, with information and risk awareness as key moderators. This behavioral response results in significant health costs often overlooked in environmental health studies.&#13;
&#13;
Chapter 3 explores the role of subjective traits in predicting adaptation behavior. Applying Natural Language Processing to social media posts from 500,000 users, we classify individual fear types and find that pre-pandemic fearfulness strongly predicts social distancing behavior during COVID-19. This project provides a scalable tool for measuring unobserved subjective traits to predict behaviors under risk and target interventions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162307</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>City as Seed: The Urban Resonance Field and the Case for Sonic Awareness in Ecological Renewal</title>
<link>https://hdl.handle.net/1721.1/162306</link>
<description>City as Seed: The Urban Resonance Field and the Case for Sonic Awareness in Ecological Renewal
Navarro, Cadine
Seeds, the “abominable mystery” (Darwin, 1897), hold our past and potential future. They also hold sound. Much like cities, they are sites of growth, transformation, and resilience. This thesis draws parallels between laboratory research on the sensing capacities of seeds and embodied experiences of sensing within urban landscapes, exploring how living systems interact with sound and vibration. Through both scientific and poetic approaches, it examines how seeds respond to sonic environments and how this sensitivity can inform human engagement with acoustics in the urban context. The investigation of intangible forces, vibration, resonance, and sound reveals a shared responsiveness between seeds and cities, documented through graphs, sound spectra, and reflective narratives that bridge science and art. Focusing on sound as a strategic lens, this work brings attention to often-overlooked sensory domains, inspiring a more ecologically and socially responsive urbanism. Ultimately, it advocates for practices of deeper listening as a method to engage openly and imaginatively with human and nonhuman worlds, and to reimagine urban environments as spaces of attunement, dialogue, and co-existence.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162306</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Monitoring and Treating Neurological Conditions Through Focal Interfacing with the Brain</title>
<link>https://hdl.handle.net/1721.1/162305</link>
<description>Monitoring and Treating Neurological Conditions Through Focal Interfacing with the Brain
Jackson, Hannah Dale
Neurological dysregulation serves as the fundamental basis for a spectrum of debilitating disorders such as Parkinson's disease and epilepsy. Despite considerable efforts, our current comprehension of these disorders and ability to treat them remains limited. Neurochemical sampling of the affected tissue can be used to monitor pathological states, but existing tools are limited by tissue reactivity and suboptimal spatiotemporal resolution. Additionally, methods for treating neurological disorders predominantly rely on systemic drug administration which is hampered by inadequate targeting and off-target effects. &#13;
&#13;
There is a need for minimally invasive modalities to both monitor and treat neurological disorders that have high spatial resolution, maintain chronic functionality, and preserve overall brain function. This thesis presents the development and implementation of neural implants capable of both infusing and sampling sub-microliter volumes of fluid with exceptional spatial precision. These implants utilize micron-scale technology to minimize scarring following implantation and allow for sustained chronic functionality. We use these devices to answer two key questions: (1) Can the localized delivery of drugs to specific neural circuits provide effective treatment for neurological diseases? and (2) Can micro-invasive sampling of brain interstitial fluid facilitate disease diagnosis and monitoring?&#13;
&#13;
We assessed our ability to treat focal epilepsy with this platform by delivering antiseizure medications directly to the seizure focus in a mouse model of temporal lobe epilepsy. We found that localized drug delivery effectively suppressed seizure activity without adverse effects. We also explored micro-invasive, membrane-free sampling of interstitial fluid from different brain regions using our device. We detected hundreds of distinct proteins from minute sample volumes with high spatial resolution and minimal tissue damage. The results from both studies highlight the platform’s potential for targeted drug delivery and biomarker detection across a variety of disease states.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162305</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward sequence-to-structure predictions of chromatin: Generative AI sheds light on genome organization</title>
<link>https://hdl.handle.net/1721.1/162304</link>
<description>Toward sequence-to-structure predictions of chromatin: Generative AI sheds light on genome organization
Schuette, Greg
The secrets of the genome have captivated scientists for well over a century, though the active role its spatial organization plays in gene regulation, cell determination, and disease formation has become clear only in recent decades. Significant strides have been made toward characterizing and understanding three-dimensional genome organization, but the scale, complexity, and heterogeneity of the genome and nuclear environment complicate investigations into this system. This thesis alleviates these challenges and holds the potential to accelerate genome organization research by presenting several methodological advances.&#13;
&#13;
An efficient Hi-C inversion algorithm appears first. This technique extracts pairwise contact potentials from experimental Hi-C data, uncovering mechanistic details obscured by the correlation between Hi-C contact probabilities. This required the development of a spin-glass model of chromatin and the derivation of a corresponding model inversion; the model may find use in further theoretical studies of chromatin, while the inversion can be applied more broadly. The inversion successfully revealed the location of chromatin loop anchors, supported the phase separation formation of chromatin compartments, and parameterized polymer models that reproduced the experimental Hi-C data with reasonable accuracy.&#13;
&#13;
The focus then shifts toward ChromoGen, a generative AI model that predicts three-dimensional chromatin structures directly from DNA sequence and chromatin accessibility data. ChromoGen provided biologically accurate structural ensembles throughout the genome of two cell types, including one omitted from its training data. This transferability suggests that ChromoGen can provide access to the organization of chromatin in a wide variety of cell types while only relying on widely available sequencing data. &#13;
&#13;
Afterward, we discuss several strategies to extend ChromoGen to full-chromosome structure prediction tasks. Preliminary results suggest that the technology of today can provide this capability, as we have generated physical chromosome conformations for mouse chromosomes, although sequencing data did not guide this generative process. Correspondingly, we explore the possibility of incorporating a multimodal model with ChromoGen, allowing it to condition structure generation on a wide variety of data types. Success in this area could enable true de novo structure prediction, greatly simplifying research aiming to understand the relationship between sequence, structure, and cellular function while also accelerating the development of treatments for diseases that implicate chromatin dysregulation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162304</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reparative Urban Science: Challenging the Myth of Neutrality and &#13;
Crafting Data-Driven Narratives</title>
<link>https://hdl.handle.net/1721.1/162303</link>
<description>Reparative Urban Science: Challenging the Myth of Neutrality and &#13;
Crafting Data-Driven Narratives
So, Wonyoung
This dissertation constructs a distinctive lens on how we should see urban technology in the context of a long history of systemic racism, and how we can take a reparative approach to intervene in contemporary situations of racial inequality with technology/data as a method to address systemic racism. The current discourse of urban science often puts an emphasis on newly available (and big) data, primarily values methodologies of “hard” sciences such as physics, computer science, and mathematics, and evolves to incorporate the latest technologies and analytic methods including machine learning and artificial intelligence. However, the role of urban science and analytics that “move[s] beyond analysis” has not been extensively theorized. In particular, the relationship between urban technologies, white supremacy, and racial capitalism has not been extensively studied. Nonetheless, the impact of the applications of such “urban analysis” on people’s lives has been substantial. Building on planning scholars’ calls for reparative planning and emerging discourses of “algorithmic reparation,” this dissertation proposes a normative framework of reparative urban science that challenges whiteness in urban science and embraces the epistemologies and methodologies of reparations. &#13;
&#13;
The dissertation follows a three-paper structure, with the first paper serving as the theoretical framework for two empirical studies, and closes with a concluding chapter. The first paper introduces the overarching theory of reparative urban science, identifying three mechanisms—formalizing, context removal and legitimization, and penalization—through which urban technologies perpetuate historical inequalities under a race-neutral guise. It then proposes reparative methodologies, including algorithmic auditing, crowd-sourced community data collection, and algorithms designed to simulate and deliberate reparative futures. The second and third papers demonstrate reparative urban science in action, exemplifying these methodologies. The second paper investigates tenant screening services and landlord decision-making. It reveals the mechanisms by which tenant screening algorithms obscure historical racial biases, and how landlords interact with them in ways that inflict harm. The third paper evaluates the reparative potential of housing programs using algorithmic methods, particularly comparing race-neutral versus race-conscious Special Purpose Credit Programs (SPCPs). It demonstrates that race-conscious SPCPs could reduce the racial housing wealth gap significantly more than race-neutral ones, establishing race-conscious policies as potential reparative tools. The concluding chapter explores theoretical and practical considerations of housing reparations through the lens of reparative justice, arguing for a deeper acknowledgment of the historical and structural harms related to land and property. Overall, this dissertation seeks to reorient urban science toward justice and repair, envisioning a transformative path forward that actively confronts historical harms and promotes healing and equity in urban futures.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162303</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Growth-Induced Cation Order and Magnetic Anisotropy Engineering in Iron Garnet Thin Films</title>
<link>https://hdl.handle.net/1721.1/162302</link>
<description>Growth-Induced Cation Order and Magnetic Anisotropy Engineering in Iron Garnet Thin Films
Kaczmarek, Allison C.
At its heart, Materials Science and Engineering is a discipline seeking to advance technologies by improving the materials that make them. Currently, the development of electronic devices is limited by the need for materials that enhance speed, reduce size, and improve energy efficiency. Spin-based memory devices, which encode data in a material’s magnetic state, offer a promising solution. Magnetic memory technologies are widespread today, powering devices such as memory disks, tapes, and magnetic random access memory (MRAM). While garnet materials have long been studied for these applications, they have faced challenges in becoming adoptable technologies. However, with advanced research techniques and a deeper understanding of material behaviors, magnetic garnets are experiencing a renaissance. This thesis explores the engineering of iron garnet thin films for next-generation spin-based memory applications. The work presented advances the understanding of non-equilibrium growth, characterization, and engineering of iron garnet thin films and their magnetic properties, emphasizing kinetic phenomena that govern atomic organization beyond the classic ordering of unit cells, as well as emergent magnetic anisotropy. In a composition series of europium-thulium iron garnet (EuTmIG) films, experiments confirm the 50-year-old theory of Eu and Tm cation site preference, demonstrating that an enhanced magnetic anisotropy, termed ‘magnetotaxial anisotropy’, is linked to cation ordering during growth. These findings lay the foundation for anisotropy engineering by cation order. Further studies investigate the effects of film formulation, growth kinetics, and post-growth annealing on structural ordering and magnetotaxial anisotropy. In bismuth-yttrium iron garnet (BiYIG) films, a linear relationship between Bi-Y ordering, magnetic anisotropy, and substrate-lattice mismatch provides deeper insight into the forces that drive cation ordering.
Annealing is shown to further enhance magnetic anisotropy in these films. In lutetium-yttrium iron garnet (LuYIG) films, the laser pulse rate during growth by pulsed laser deposition is shown to influence Lu-Y ordering and magnetic anisotropy, reinforcing the kinetic nature of the cation ordering. The findings of this thesis contribute to the fundamental understanding of cation ordering in complex oxide films and provide a framework for engineering and characterizing garnet materials, enabling the future development of new spintronic devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162302</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling and probing nonlinear collective mode dynamics in quantum materials</title>
<link>https://hdl.handle.net/1721.1/162301</link>
<description>Controlling and probing nonlinear collective mode dynamics in quantum materials
Zhang, Zhuquan
Tailored laser pulses offer a powerful means of driving materials out of equilibrium by selectively addressing specific degrees of freedom. In particular, the excitation of low-energy collective modes in solids—such as lattice vibrations (phonons) and spin precessions (magnons)—to large amplitudes opens fundamentally new pathways for controlling and probing material properties that are otherwise inaccessible under thermal equilibrium conditions. In this regime, both the nonlinear interactions between light and matter and the intrinsic nonlinear dynamics of the driven modes present significant challenges for understanding the underlying mechanisms and for realizing potential applications.&#13;
&#13;
This dissertation centers on two major themes: (1) probing equilibrium properties of materials via nonlinear light-matter interactions; and (2) unveiling emergent phenomena hidden in equilibrium by driving collective modes far from equilibrium. &#13;
&#13;
I begin by providing an overview of recent advances in controlling and probing quantum materials out of equilibrium, followed by a discussion of the theoretical frameworks and experimental methodologies used to interrogate collective excitations. Building on this foundation, I present two studies demonstrating how terahertz Raman excitation can reveal distinct spectroscopic signatures of material states.&#13;
&#13;
Subsequently, I focus on coherent nonlinear magnon-magnon interactions in canted antiferromagnets, induced by tailored terahertz fields. In these experiments, we demonstrate a unidirectional magnon upconversion process and identify correlated magnonic responses at both the sum and difference frequencies of the interacting modes. We achieve parametric amplification of magnon coherence by tuning the magnonic difference-frequency generation into resonance with a low-frequency magnon. Furthermore, by increasing the driving field strength to access a far-from-equilibrium regime, we uncover spectroscopic signatures of non-perturbative dynamics marked by strong magnon self-interactions.&#13;
&#13;
Finally, I present an example in which spatially heterogeneous responses of electromagnon modes in a van der Waals multiferroic are revealed through terahertz photon echo measurements. Together, these results highlight how tailored light-matter interactions can be leveraged to probe, control, and manipulate material degrees of freedom, both in and out of equilibrium.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162301</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding Brain Somatic Mosaicism with New Single-Cell Copy Number Analysis Methods</title>
<link>https://hdl.handle.net/1721.1/162300</link>
<description>Decoding Brain Somatic Mosaicism with New Single-Cell Copy Number Analysis Methods
Zhao, Yifan
Copy number variants (CNVs) represent a significant but understudied form of somatic variation in the human brain, with potential implications for neurodevelopment, aging and disease. While single-cell whole-genome sequencing (scWGS) enables genome-wide profiling at single-cell resolution, existing computational methods struggle to accurately detect non-clonal CNVs, limiting our understanding of genomic mosaicism in the brain. In this thesis, I present two novel and complementary computational approaches for high-resolution CNV analysis in single cells. The first, HiScanner, is a CNV detection method that integrates single-cell assay-specific characteristics and introduces innovations in bin size optimization, read depth normalization, and joint segmentation across cells. Through extensive benchmarking experiments, I demonstrate HiScanner’s superior performance compared to existing tools. The second is a validation method that leverages unique molecular patterns from tagmentation-based scWGS, representing the first tool that exploits fragment overlap patterns to corroborate CNV predictions. I then apply these tools to investigate CNVs in three biological contexts: tumor evolution in paired initial and recurrent meningiomas, age-related genomic changes in neurotypical human brains, and developmental patterns in fetal and postnatal brain tissues. By analyzing both scWGS and multimodal single-cell data (paired RNA-seq and ATAC-seq), I characterize cell-type-specific CNV patterns and their potential functional implications. This work establishes a robust framework for studying somatic CNVs at single-cell resolution and provides insights into genomic instability in brain development, aging, and disease.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162300</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Critical Vulnerabilities of AI in Latin America</title>
<link>https://hdl.handle.net/1721.1/162299</link>
<description>Critical Vulnerabilities of AI in Latin America
Dobles Camargo, Claudia
Artificial Intelligence (AI) is rapidly reshaping societies, economies, and governance systems worldwide. While it offers tools for addressing critical challenges such as climate change, health care, and educational inequity, it also risks deepening historical inequalities, undermining democratic institutions, and exacerbating global technological dependencies if not ethically governed. Latin America faces unique vulnerabilities in the development and use of AI that remain underexplored in existing scholarship, such as informal data work and the territorial principle with its implications for AI law enforcement. This study investigates AI’s critical vulnerabilities within the Latin American context in order to propose regional and national policies that advance an inclusive, strategic, and ethical approach to developing and deploying AI systems in Latin America. It addresses this question through a cross-analysis and comparative case study of six countries (Brazil, Chile, México, Costa Rica, El Salvador, and Honduras), drawing on existing and recent global and regional benchmarks, including the Stanford HAI AI Index (2024), UNESCO’s Recommendation on the Ethics of AI (2021), and the Latin American AI Index (ILIA 2024). The countries were selected to span a broad range of AI readiness levels, with a focus on mapping institutional, regulatory, and socio-political contexts as well as metrics and input from relevant sources. The analysis shows structural inequality to be the core vulnerability shaping AI’s impact in Latin America, alongside governance gaps, limited regional cooperation, and minimal public participation. It identifies ten critical vulnerabilities, including the use of AI in surveillance, rising inequality, increased disinformation, AI use in organized crime, and environmental exploitation, that, if unaddressed, may accelerate democratic erosion and technological dependency.
Ethical principles are shown to be deeply interconnected and grounded in human rights, yet their implementation remains aspirational. This research underscores a call for action toward regional coordination, inclusive education strategies prioritizing gender policies and rural areas, and aligned industrial policies in the countries of the region. A Latin American context-specific, collective approach ensures that AI serves the public interest, strengthens sovereignty, and supports equitable development in Latin America.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162299</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Origin and Correlates of Viral Rebound in SIV-Infected Rhesus Macaques Following Discontinuation of ART</title>
<link>https://hdl.handle.net/1721.1/162298</link>
<description>Origin and Correlates of Viral Rebound in SIV-Infected Rhesus Macaques Following Discontinuation of ART
King, Irena V.
The earliest events of viral rebound following discontinuation of ART in people living with Human Immunodeficiency Virus-1 remain largely unknown. We investigated detailed reservoir characteristics and viral rebound dynamics in 18 Simian Immunodeficiency Virus-infected rhesus macaques treated with antiretroviral therapy for 70 weeks and then necropsied after a 12-day analytical treatment interruption (ATI). Using molecularly barcoded SIVmac239M, we tracked viral clonotypes following ATI in both peripheral blood and tissues at necropsy. Viral rebound appeared to originate from reactivation of a single or a few barcode clonotypes in a limited number of deep lymph nodes or gastrointestinal tissues, followed by rapid replication of these clonotypes in peripheral blood and tissues as well as serial reactivation of multiple additional barcode clonotypes from different anatomic sites, resulting in oligoclonal plasma viremia. Daily transcriptomic and proteomic profiling in peripheral blood following ATI identified early upregulation of pathways related to T cell signaling, cytokine responses, and metabolism prior to detectable plasma viremia, presumably reflecting initial viral replication in tissues. Taken together, these data provide a detailed anatomic, virologic, and immunologic characterization of viral rebound in SIV-infected macaques following ATI, providing critical information for the development of next-generation HIV-1 cure strategies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162298</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale Modeling of Genome Organization: Bridging Polymer Physics, Molecular Dynamics, and AI</title>
<link>https://hdl.handle.net/1721.1/162297</link>
<description>Multiscale Modeling of Genome Organization: Bridging Polymer Physics, Molecular Dynamics, and AI
Lao, Zhuohan
The human genome is intricately organized within the nucleus, and its spatial arrangement plays a critical role in gene regulation, cellular function, and disease. Recent advances in high-throughput experiments have unveiled the heterogeneous and dynamic nature of chromatin organization at single-cell resolution. However, computational tools that can both simulate and predict such complex structures are still limited. In this thesis, we develop and apply computational frameworks to investigate nuclear genome organization at high spatial and temporal resolution. Our approaches integrate biophysical modeling and generative artificial intelligence to address complementary aspects of nuclear architecture.&#13;
&#13;
In Chapter 1, we provide an overview of the hierarchical organization of the genome and discuss emerging principles that govern chromatin folding, nuclear compartmentalization, and their functional implications. We introduce data-driven, physics-based, and generative artificial intelligence modeling approaches, highlighting the need for interpretable and efficient models capable of capturing the structural diversity of the nucleus across individual cells.&#13;
&#13;
In Chapter 2, we present OpenNucleome, a high-resolution molecular dynamics framework for simulating the entire human nucleus at 100-kilobase resolution. OpenNucleome incorporates explicit representations of chromosomes, nuclear bodies, and the nuclear lamina, and faithfully reproduces experimental data from Hi-C, TSA-seq, DamID, and DNA-MERFISH. The developed software is fully open-source and GPU-accelerated, enabling large-scale simulations and mechanistic explorations.&#13;
&#13;
In Chapter 3, we explore the impact of genome organization on various biological phenomena within the cell nucleus—focusing on telomere and telomere condensate dynamics, and nuclear deformation—using OpenNucleome. Our results demonstrate that the three-dimensional genome architecture plays a pivotal role in governing the dynamics of genomic loci such as telomeres, influencing the kinetics and outcomes of droplet coarsening. Moreover, specific interactions between the genome and nuclear bodies form robustly across cells, providing strong support for a nuclear zoning model of genome function.&#13;
&#13;
In Chapter 4, we introduce ChromoGen, a generative diffusion model that predicts single-cell chromatin conformations de novo from DNA sequence and DNase-seq data. Unlike traditional simulation frameworks, ChromoGen learns from experimental single-cell 3D structures to generate physically realistic, region- and cell type-specific ensembles. ChromoGen achieves high agreement with both experimental Dip-C and Hi-C data while maintaining computational efficiency, enabling rapid exploration of chromatin heterogeneity across the genome and cell types.&#13;
&#13;
Together, these two frameworks—OpenNucleome and ChromoGen—provide powerful and complementary tools for understanding genome structure and function at the single-cell level, bridging physics-based modeling and deep generative artificial intelligence modeling.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162297</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On single-cell immune dynamics of chronic HIV infection and treatment in rhesus macaque models</title>
<link>https://hdl.handle.net/1721.1/162296</link>
<description>On single-cell immune dynamics of chronic HIV infection and treatment in rhesus macaque models
Quinn, Sarah Lynne
Human Immunodeficiency Virus (HIV) continues to be an overwhelming challenge in both global health and immunology. With no cure available, 30 million people worldwide rely on antiretroviral therapy (ART) to prevent transmission and disease progression. However, individuals on ART are at elevated risk for numerous comorbidities and are susceptible to continued disease progression should treatment be stopped or interrupted. Addressing the challenges resulting from treatment and lack of cure requires a deeper understanding of the complex underlying immunology of HIV infection, treatment, and therapeutics. &#13;
Single-cell RNA sequencing (scRNAseq) is continually advancing our understanding of immune dynamics and when combined with well-characterized rhesus macaque models of HIV, provides an opportunity to profile immune perturbations over extensive time courses in a controlled setting. In this thesis, I present two studies that further our understanding of the host immune response across stages of infection, treatment, and therapeutic intervention, using rhesus macaque models.  &#13;
In the first study (Chapter 2), I comprehensively profiled immune dynamics during untreated infection, ART initiation, and long-term ART, leveraging a longitudinal cohort of Simian Immunodeficiency Virus (SIV)-infected macaques. This work is particularly relevant given the increasing age and average time spent on ART among people living with HIV. scRNAseq revealed key immune shifts during acute and chronic infection, as well as over five years of subsequent ART. I identified cell type composition shifts during prolonged untreated infection and uncovered areas of unresolved immune dysregulation despite long-term ART, most prominently in myeloid gene expression and pathway enrichment. I further link transcriptional changes to intact proviral burden and identify ribosomal pathways as markers of infection stage, treatment status, and reservoir size. Finally, by evaluating published immune correlates of treatment outcome, I identify which signatures change and which remain stable with time on ART. &#13;
In the second study (Chapter 3), I expand on the baseline infection and treatment case by evaluating immune dynamics in response to a post-exposure combination therapeutic (Ad26/MVA + PGT121 + Vesatolimod) previously shown to induce post-ART viral control in most (7/10) treated macaques infected with Simian-Human Immunodeficiency Virus (SHIV). Here I identify features of therapeutic response and implicate a previously defined Antibody-Dependent Cellular Phagocytosis signature as associated with control. Furthermore, I identify a new cytotoxic transcriptional module in T and NK cells associated with both non-rebounding animals and post-rebound controller animals, suggesting a shared effector program associated with successful virologic control. &#13;
Supported by a thorough introduction on immunological techniques, questions and applications to HIV studies (Chapter 1), and a discussion of intersectionality and future directions of the field (Chapter 4), this thesis provides a comprehensive analysis of immune dynamics across the lifecycle of viral infection, treatment, and therapeutic intervention in macaque models of HIV. This work demonstrates how the host immune environment influences therapeutic success, laying a foundation for future therapeutic design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162296</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adsorption and electrostatic potentials at the electrochemical interface</title>
<link>https://hdl.handle.net/1721.1/162295</link>
<description>Adsorption and electrostatic potentials at the electrochemical interface
Nowack, Linsey
This paper explores adsorption in electrochemical systems. Part I reviews ways to quantify adsorption, the energetic considerations that make adsorption favorable or unfavorable, how to measure adsorption experimentally, and how it has previously been modeled using extensions of the Langmuir isotherm. At the end of Part I, a simple Monte Carlo (MC) model is applied to a very complex carbon dioxide reduction system to study competitive adsorption. This application of Monte Carlo simulations demonstrates the challenges of extracting meaningful parameters from empirically fitting MC simulations to isotherms derived from nanoparticle-enhanced Raman spectra.&#13;
&#13;
Part II examines how adsorption influences the electrostatic potential in the electrochemical double layer using molecular dynamics simulations. Building on previous work from the Willard group, this chapter calculates how adsorbate polarity and coverage influence two characterizations of Coulombic interactions: the Poisson potential and the Madelung potential. Both potentials, while having different shapes as a function of distance from the electrode surface, exhibit strong sensitivity to water structure. At high coverage, adsorbates decrease the number of interfacial waters, shift the position of the molecular layers of water at the interface, and disrupt the water's orientational order. Lastly, cross-sections of the 3D Poisson potential parallel to the electrode surface reveal large heterogeneity in Poisson potential values as a result of adsorbates. This suggests that 1D electrostatic potential profiles are not enough to understand forces in the electrochemical double layer.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162295</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The combustion of droplets of heavy liquid fuels</title>
<link>https://hdl.handle.net/1721.1/162243</link>
<description>The combustion of droplets of heavy liquid fuels
Simpson, Hugh Cameron.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1954; Vita.; Bibliography: leaves 542-552.
</description>
<pubDate>Fri, 01 Jan 1954 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162243</guid>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The melting points of isopropyl esters of aromatic nitro acids</title>
<link>https://hdl.handle.net/1721.1/162242</link>
<description>The melting points of isopropyl esters of aromatic nitro acids
Zeng, Zhaolun, 1898-1967.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1923
</description>
<pubDate>Mon, 01 Jan 1923 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162242</guid>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the use of selected derivatives in the specific characterization of alcohols, phenols, amines and mercaptans</title>
<link>https://hdl.handle.net/1721.1/162241</link>
<description>On the use of selected derivatives in the specific characterization of alcohols, phenols, amines and mercaptans
Zeng, Zhaolun, 1898-1967.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemistry, 1926; Vita.
</description>
<pubDate>Fri, 01 Jan 1926 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162241</guid>
<dc:date>1926-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of certain concrete sands in the vicinity of Portland, Maine</title>
<link>https://hdl.handle.net/1721.1/162240</link>
<description>An investigation of certain concrete sands in the vicinity of Portland, Maine
Blandford, Sidney E. (Sidney Edgar); Cheney, Laurence B.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1927
</description>
<pubDate>Sat, 01 Jan 1927 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162240</guid>
<dc:date>1927-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurements on contact potentials of metals</title>
<link>https://hdl.handle.net/1721.1/162239</link>
<description>Measurements on contact potentials of metals
Zisman, William A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1928; Includes bibliographical references (leaves 59-60).
</description>
<pubDate>Sun, 01 Jan 1928 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162239</guid>
<dc:date>1928-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A department store for the Hudson Bay Company, Edmonton, Alberta, Canada</title>
<link>https://hdl.handle.net/1721.1/162238</link>
<description>A department store for the Hudson Bay Company, Edmonton, Alberta, Canada
Thrift, Eric W.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1938
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162238</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The misfit and the non-specialist student at M.I.T.</title>
<link>https://hdl.handle.net/1721.1/162237</link>
<description>The misfit and the non-specialist student at M.I.T.
Peskoe, Irving.
Thesis: B.S., Massachusetts Institute of Technology, Department of General Science, 1939; Includes bibliographical references (leaf 64).
</description>
<pubDate>Sun, 01 Jan 1939 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162237</guid>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The buckling load of hyperbolic paraboloid shells</title>
<link>https://hdl.handle.net/1721.1/162236</link>
<description>The buckling load of hyperbolic paraboloid shells
Lee, Samuel Tak.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1962; Includes bibliographical references (leaf 85).
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162236</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of maladjustment in the class of 1955 at M. I. T.</title>
<link>https://hdl.handle.net/1721.1/162235</link>
<description>A study of maladjustment in the class of 1955 at M. I. T.
Langberg, Arnold.
Thesis: B.S., Massachusetts Institute of Technology, Department of General Engineering, 1955
</description>
<pubDate>Sat, 01 Jan 1955 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162235</guid>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Education in the humanities at the Massachusetts Institute of Technology.</title>
<link>https://hdl.handle.net/1721.1/162234</link>
<description>Education in the humanities at the Massachusetts Institute of Technology.
Perrolle, Judith Ann.
Thesis: B.S., Massachusetts Institute of Technology, Department of Humanities, 1966; Bibliography: leaves 98-99.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162234</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Going to MIT.</title>
<link>https://hdl.handle.net/1721.1/162233</link>
<description>Going to MIT.
Landau, David Lewis.
Thesis: B.S., Massachusetts Institute of Technology, Department of Humanities, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162233</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The chemistry of octahalodirhenate (III).</title>
<link>https://hdl.handle.net/1721.1/162232</link>
<description>The chemistry of octahalodirhenate (III).
Robinson, William Robert.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1966; Bibliography: leaves 112-116.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162232</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The 1970 student strike: the children's crusade at MIT; a study in contemporary history.</title>
<link>https://hdl.handle.net/1721.1/162231</link>
<description>The 1970 student strike: the children's crusade at MIT; a study in contemporary history.
Giguere, Lee David.
Thesis: B.S., Massachusetts Institute of Technology, Department of Humanities, 1973; Leaf 69 used twice.; Bibliography: leaf 163.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162231</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The style of M.I.T. education.</title>
<link>https://hdl.handle.net/1721.1/162230</link>
<description>The style of M.I.T. education.
Green, Patrick Conal.
Thesis: B.S., Massachusetts Institute of Technology, Department of Humanities, 1970; Bibliography: leaf 55.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162230</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the economics of advertising.</title>
<link>https://hdl.handle.net/1721.1/162229</link>
<description>On the economics of advertising.
Schmalensee, Richard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1970; Vita.; Bibliography: leaves 485-496.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162229</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An essay on taxation and growth in an enclave economy.</title>
<link>https://hdl.handle.net/1721.1/162228</link>
<description>An essay on taxation and growth in an enclave economy.
Francis, Alfred Alexander Jaques.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1968; Bibliography: leaves 130-131.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162228</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An examination of low hysteresis polyurethane for use in traction drive rolling elements</title>
<link>https://hdl.handle.net/1721.1/162227</link>
<description>An examination of low hysteresis polyurethane for use in traction drive rolling elements
Castellano, John Philip.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1981; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162227</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Career goals, attitudes, and interpersonal relationships of M.I.T. undergraduates</title>
<link>https://hdl.handle.net/1721.1/162226</link>
<description>Career goals, attitudes, and interpersonal relationships of M.I.T. undergraduates
Slaughter, Sarah.
Thesis: B.S., Massachusetts Institute of Technology, Department of Humanities, 1982; Bibliography: leaf 99.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162226</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The work and meanings of science and engineering : interviews with MIT scholars</title>
<link>https://hdl.handle.net/1721.1/162225</link>
<description>The work and meanings of science and engineering : interviews with MIT scholars
Karaku, Alex Theodore.
Thesis: B.S., Massachusetts Institute of Technology, Department of Humanities, 1982; Bibliography: leaf 14.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162225</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonpolarized relay</title>
<link>https://hdl.handle.net/1721.1/162224</link>
<description>Nonpolarized relay
Tseng, C. C.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1915
</description>
<pubDate>Fri, 01 Jan 1915 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162224</guid>
<dc:date>1915-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Hiddenness Argument and The Limits of Doxastic Positioning</title>
<link>https://hdl.handle.net/1721.1/162162</link>
<description>The Hiddenness Argument and The Limits of Doxastic Positioning
Garcia, Nicole
If God exists, how clear or obvious should we expect his existence to be? Particularly if such a God is interested in having a personal relationship with us? The Hiddenness Argument contends that it should be much clearer than it in fact is. If God exists and really wants us to know as much, we should expect to inhabit a very different epistemic situation than we in fact do – one that rules out the possibility of rational nonbelief. The evidence available for God’s existence should be so definitive that it would be impossible for us to fail to believe on good epistemic terms. &#13;
My dissertation sets out to delegitimize this expectation. While available objections to it challenge its propriety – God may have overriding reasons for disclosing his existence in a way that allows for rational nonbelief – my account challenges its feasibility – whether it is in principle possible to meet. Expecting divine self-disclosure to rule out rational nonbelief assumes that it can rule out rational nonbelief – but can it? By homing in on the nature, mechanics, and limitations of disclosure itself, I show it cannot.&#13;
Divine self-disclosure is an instance of what I call doxastic positioning: the process of positioning someone to rationally form some belief – in this case, theistic belief. To rule out rational nonbelief, God’s disclosure would need to make theistic belief a universal rational requirement. Given the success conditions of doxastic positioning, this would involve the provision of sufficient evidence as well as the universal possession and appreciability of said evidence. But no matter what evidence God provides, God cannot guarantee on pain of irrationality that humans will possess or be in a position to appreciate the available evidence, leaving rational nonbelief an ever live possibility. —Which is just to say that divine self-disclosure cannot rule out rational nonbelief. The expectation that it would do so is, then, illegitimate and the hiddenness argument depending on it fails.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162162</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proteolethargy is a pathogenic mechanism in chronic disease</title>
<link>https://hdl.handle.net/1721.1/162161</link>
<description>Proteolethargy is a pathogenic mechanism in chronic disease
Moreno, Shannon
The pathogenic mechanisms of many diseases are well understood at the molecular level, but there are prevalent syndromes associated with pathogenic signaling, such as diabetes and chronic inflammation, where our understanding is more limited. Here, I present evidence that pathogenic signaling suppresses the mobility of a spectrum of proteins that play essential roles in cellular functions known to be dysregulated in these chronic diseases. The reduced protein mobility, which we call proteolethargy, was linked to cysteine residues in the affected proteins and signaling-related increases in excess reactive oxygen species. Diverse pathogenic stimuli, including hyperglycemia, dyslipidemia, and inflammation, produce similar reduced protein mobility phenotypes. I propose that proteolethargy is an overlooked cellular mechanism that may account for various pathogenic features of diverse chronic diseases.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162161</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural Sequences Underlying Directed Turning in C. elegans</title>
<link>https://hdl.handle.net/1721.1/162160</link>
<description>Neural Sequences Underlying Directed Turning in C. elegans
Kramer, Talya
Complex behaviors like navigation rely on sequenced motor outputs that combine to generate effective movement. The brain-wide organization of the circuits that integrate sensory signals to select and execute appropriate motor sequences is not well understood. Here, we characterize the architecture of neural circuits that control C. elegans olfactory navigation. We identify error-correcting turns during navigation and use whole-brain calcium imaging and cell-specific perturbations to determine their neural underpinnings. These turns occur as motor sequences accompanied by neural sequences, in which defined neurons activate in a stereotyped order during each turn. Distinct neurons in this sequence respond to sensory cues, anticipate upcoming turn directions, and drive movement, linking key features of this sensorimotor behavior across time. The neuromodulator tyramine coordinates these sequential brain dynamics. Our results illustrate how neuromodulation can act on a defined neural architecture to generate sequential patterns of activity that link sensory cues to motor actions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162160</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Dimensional Statistics for Causal Inference and Panel Data</title>
<link>https://hdl.handle.net/1721.1/162159</link>
<description>High-Dimensional Statistics for Causal Inference and Panel Data
Klosin, Sylvia
This dissertation develops new econometric tools for causal inference in panel data settings, with a focus on addressing key biases that arise in high-dimensional and dynamic environments. While this dissertation is motivated by the need to flexibly measure the economic impacts of climate change, the methods I develop are much more general. They apply broadly to panel data problems across empirical economics—including in labor, development, and industrial organization—where standard fixed effects estimators may fail.&#13;
The first chapter identifies a previously overlooked source of bias in fixed effects panel estimators, which I term dynamic bias. This bias arises when dynamic feedback—where past outcomes influence current outcomes—is ignored in the estimating equation. I show that dynamic bias can be severe even when treatments are randomly assigned and that it often exceeds the well-known Nickell bias. To address this, I develop a bias-corrected estimator that is consistent in panels with a fixed number of time periods. I apply this method to estimate the effects of temperature shocks on GDP, where accounting for dynamic feedback reduces estimated damages substantially. The second chapter, coauthored with Max Vilgalys, proposes a flexible estimator for continuous treatment effects using panel data with fixed effects. We extend the double debiased machine learning (DML) framework to this setting and prove consistency and asymptotic normality. In an application to U.S. agriculture, we show that our estimator captures nonlinear effects of temperature on crop yields more accurately than standard linear models, estimating substantially larger damages from extreme heat. The final chapter further generalizes the methodological contribution by introducing a non-parametric estimator of the average dose-response function. Building on recent developments in DML and automatic double machine learning (ADML), I propose a novel debiasing strategy that directly estimates the bias correction term, yielding favorable theoretical properties.&#13;
Together, these essays provide practical and theoretically grounded tools for applied researchers working with panel data, particularly in settings characterized by high dimensionality, continuous treatments, or dynamic feedback.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162159</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Health Economics</title>
<link>https://hdl.handle.net/1721.1/162158</link>
<description>Essays in Health Economics
Moran, Kelsey C.
This dissertation comprises three essays in health economics. The first paper studies how imperfect electronic health record (EHR) system compatibility, or interoperability, affects patients. My coauthors, Rebekah Dix and Thi Mai Anh Nguyen, and I find that improved EHR interoperability between hospitals leads to better health outcomes and lower costs for shared patients. We also show that hospitals prefer sending patients to facilities with more compatible EHR systems, causing patient reallocation across providers based on technological factors. Using a model of patient flows, we estimate that eliminating these frictions would generate substantial welfare gains by improving patient outcomes and reducing allocative distortions. The second paper examines how regulatory requirements influence hospital charity care by analyzing the Hill-Burton Act of 1946, which allocated $6 billion to over 3,500 hospitals in exchange for providing free care to uninsured patients. I find that after these obligations expire, hospitals strategically reduce charity care by 30% and decrease admissions of charity-eligible patients by 14%. These patients subsequently shift to neighboring public and non-profit hospitals, where they must pay for care and experience higher mortality rates. The third paper, co-authored with Ari Bronsoler, Joseph Doyle, and John Van Reenen, studies the broad impact of Health Information Exchange (HIE) on patient outcomes. Using a newly compiled database of state HIE laws as instruments for hospital HIE, we find that HIE significantly reduces mortality from infectious diseases and hospital readmission rates for common conditions. With HIE usage increasing by 50 percentage points from 2009 to 2019, we estimate this technology saved approximately 27,000 lives annually through improved care coordination and public health response.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162158</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Firms and Technology in Development Economics</title>
<link>https://hdl.handle.net/1721.1/162157</link>
<description>Essays on Firms and Technology in Development Economics
Houeix, Deivy
My thesis investigates the relationships between technology and firms in lower-income countries. I explore both the economic impacts of technology on firms—how it affects their economic outcomes and their relationships with stakeholders, both within and across firms—and the determinants of technology adoption: what underlying factors impede or drive the uptake of new technologies? I combine a diverse set of methods, including large-scale field experiments, economic theory, and long-term collaborations with local partners to investigate how small firms—such as taxis, retailers, and others—transform their practices and relationships as they adopt new technologies. My work centers on West Africa, one of the world’s poorest regions, where economic research remains limited. &#13;
&#13;
Chapter 1: The first chapter investigates the idea that digital technologies have the potential to increase firm productivity. However, they often come bundled with data observability, which can be a double-edged sword. Observability reduces information frictions and can increase efficiency, but some agents may lose their informational rent and thus resist adoption. I explore this trade-off between observability and adoption through two field experiments conducted over nearly two years. These experiments, guided by contract theory, introduce digital payments to the Senegalese taxi industry in partnership with the country's largest payment company. In the first experiment, I randomize access to digital payments for drivers (employees) and transaction observability to taxi owners (employers). I find that digital payments reduce drivers' cash-related costs by about half but also serve as effective monitoring tools for taxi owners. Transaction observability substantially increases driver effort, contract efficiency, and the duration of owner-driver relationships. However, 50% of drivers—primarily the worst-performing and poorest—decline to adopt digital payments when transactions are observable. The second experiment shows that the adoption rate doubles when drivers are assured that owners will not be able to observe their transactions. I develop a theoretical framework and use the experimental variations to estimate the welfare impacts of policy counterfactuals. I show that removing transaction observability would maintain moral hazard problems but broaden adoption and thus increase overall welfare—an approach ultimately implemented by the payment company. These findings highlight the crucial role of information embedded in digital technologies, as it magnifies gains for adopting firms but can deter initial adoption.&#13;
&#13;
Chapter 2: In the second chapter, I conduct a randomized experiment to study the nationwide technology diffusion of a new digital payment technology in Senegal. By leveraging two novel sources of network data—mobile money transactions and anonymized phone contact directories covering the near universe of the adult population in Senegal—I causally identify three sets of adoption spillovers from taxi firms randomized to receive early access to the technology: intra-industry among taxi firms; inter-industry between taxi drivers and other small businesses; and inter-regional spillovers from the capital city to businesses in other urban centers. I show that spillovers go beyond strategic complementarities, reflecting social learning within firms' social networks, driven by social ties and remote interactions.&#13;
&#13;
Chapter 3: In the third and final chapter, I explore the fact that search and trust frictions have historically made it hard for small firms in lower-income countries to buy inputs from foreign markets. The growth in smartphone ownership and social media usage has the potential to alleviate these barriers. Informed by a dynamic model of relational contracting, we run a field experiment leveraging these technological tools to provide exogenous variation in (1) search frictions and (2) trust frictions (adverse selection and moral hazard) in a large international import market. In our search treatment, we connect a randomly selected 80% of 1,862 small garment firms in Senegal to new suppliers in Turkey. We then cross-randomize two trust treatments that provide additional information about the types (adverse selection) and incentives (moral hazard) of these new suppliers. Alleviating search frictions is sufficient to increase access to foreign markets: in all treated groups, firms are 26% more likely to have the varieties a mystery shopper requests and the goods sold are 30% more likely to be high quality. However, the trust treatments are necessary for longer-term impact: using both transaction-level mobile payments data and a follow-up survey, we show that these groups are significantly more likely to develop the connections into relationships that persist beyond the study. These new relationships lead to increases in medium-run profit and sales. Finally, we use the treatment effects to estimate the model and evaluate counterfactuals where we set various combinations of the frictions to zero, finding that the largest gains come from eliminating adverse selection.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162157</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Industrial Policy, Misallocation and Production Networks</title>
<link>https://hdl.handle.net/1721.1/162156</link>
<description>Essays in Industrial Policy, Misallocation and Production Networks
Garg, Tishara
The thesis comprises three chapters studying industrial policy, misallocation and macroeconomic propagation in developing countries. The first chapter studies the impact of place-based industrial policies on equilibrium selection with Indian industrial parks as the empirical context. The second and third chapters study firm networks in Chile and Turkey respectively; using a common theoretical framework, the former studies the incidence of distortions while the latter studies the propagation of a large refugee shock. The first chapter introduces a method to study the impact of policy events on equilibrium selection in settings where strong complementarities may lead to multiple equilibria and coordination failures. Many industrial policies are rooted in the idea of coordination failures and ‘big-push’ theories, yet empirical evidence on their effectiveness remains limited, since distinguishing equilibrium shifts from direct changes in fundamentals is challenging. Leveraging tools from industrial organization and algebraic geometry, I develop an approach to study coordination effects without imposing strong assumptions on the distribution or responsiveness of economic fundamentals. The method identifies the ‘types’ of factual and counterfactual equilibria through a three-step procedure: model estimation and inversion, equilibrium enumeration, and type assignment. Types of factual equilibria may be used to examine how events, like urban infrastructure, subsidy drives, or trade liberalization, affect equilibrium selection. Types of counterfactual equilibria further allow decomposition of observed effects into fundamentals- versus coordination-driven. I apply this method to study industrial zones in India. Using a newly assembled dataset, I find that municipalities receiving an industrial zone see a 60% increase in non-farm employment over 15 years, with significant spillovers to non-targeted sectors and municipalities.
Combining the methodology with event study designs, I find that industrial zones increase the probability of escaping a low-industrialization equilibrium by 38%, with coordination effects explaining roughly one-third of the observed change in outcomes. The second chapter (joint with David Atkin, Baptiste Bernadac, Dave Donaldson, and Federico Huneeus) combines unique datasets from Chile to quantify the full incidence of distortions for the first time. Economic distortions—such as market power, taxes, credit constraints, etc.—are fundamental in understanding the difference between developing and developed economies. Recent work has documented the pervasive extent of economic distortions and how they lead to substantial misallocation, or aggregate productivity loss. Far less well understood is how these phenomena affect members of society differently. We embed a new dataset which we build by linking workers and owners to firms, firms to each other, firms to consumers, and firms and consumers to the government, inside a general equilibrium model of the Chilean economy. Armed with internal estimates of distortions on exchanges throughout the economy, as well as data on the network of such linkages, we conduct a series of counterfactual simulations that illuminate the incidence of distortions in our model economy. We find that the burden of distortions falls relatively more on the shoulders of the poor, the young and women. The final chapter (joint with Ahmet Gulek) investigates how immigration-induced wage shocks can propagate beyond the regions receiving immigrants through the production network. Using the Syrian refugee crisis in Turkey as a quasi-experiment and the near universe of domestic firm-to-firm transaction data from VAT records, we show that the immigration shock propagates both forward and backward along the supply chain. Firms in non-host regions who directly or indirectly buy from host regions demand more labor.
Firms who sell to host regions weakly increase their sales. Estimates imply an elasticity of substitution between labor and intermediate goods of 0.76 and an elasticity of substitution of nearly 1 between intermediates. Counterfactual analyses show that the spillover effects on non-host regions are economically meaningful when the host regions are central nodes of the domestic trade network. For example, a 1% increase in labor supply in Istanbul decreases real wages in Istanbul by 0.56% and increases real wages in the average non-host city by 0.38%.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162156</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Community Retrofit Trust: Incentivizing Deep Energy Retrofits in Massachusetts' Triple Deckers</title>
<link>https://hdl.handle.net/1721.1/162155</link>
<description>The Community Retrofit Trust: Incentivizing Deep Energy Retrofits in Massachusetts' Triple Deckers
Chuttani, Milan
To meet its 2050 net-zero carbon emissions goals, Massachusetts must rapidly retrofit its aging stock of three-story multi-family homes, also known as “Triple Deckers.” However, high upfront capital costs, disparities between subsidized gas and electric energy rates, complex eligibility criteria, and misaligned incentives for landlords and renters constrain the widespread adoption of deep energy retrofits (DERs) in small multi-family homes. &#13;
&#13;
Drawing on energy democracy and reparative planning theory, this thesis reframes Triple Decker retrofits as a pathway to social and spatial transformation that empowers residents through cooperative participatory processes. This project proposes a practical framework for a “Community Retrofit Trust” which uses systems of distributed energy savings, community ownership of DER assets, and cooperative governance to ensure tenants, building owners, and neighbors in environmental justice communities share benefits from DERs while maintaining rental affordability. A proposed values-based decision-making process also helps community cooperatives adapt the Retrofit Trust’s framework to their unique social contexts.&#13;
&#13;
Descriptive case studies of two community solar initiatives illustrate how cooperative approaches that build trust, bundle projects and local expertise, and expand opportunities for participation can efficiently distribute energy benefits across a community while increasing investment and lowering costs. A feasibility analysis of a Community Retrofit Trust in Boston examines the strengths, challenges, and contradictions of incentivizing Triple Decker DERs through a cooperative approach.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162155</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rebuilding Civic Infrastructure for Equitable Development: Intermediary Solutions for Transforming Resource-Extractive Economies in Rural Southwest Arkansas</title>
<link>https://hdl.handle.net/1721.1/162154</link>
<description>Rebuilding Civic Infrastructure for Equitable Development: Intermediary Solutions for Transforming Resource-Extractive Economies in Rural Southwest Arkansas
Bradford, Mo
Southwest Arkansas, a rural and mineral-rich region, is entering a new wave of resource-driven economic activity fueled by lithium extraction. While local leaders are pushing for rapid industry development to counter long-standing socioeconomic decline, this research asks a critical question: Can these pro-industry strategies truly deliver equitable and lasting public benefits, or will they repeat historical patterns of extraction that have sidelined local communities?&#13;
This study critiques neoliberal development schemes and neoconservative, sectionalist ideologies that deprioritize equity-driven agendas and prioritize deregulation and private sector efficiency, arguing that such approaches often weaken institutional civic organizing and reduce responsiveness to public needs. As an alternative, it proposes civic infrastructure as a strategic solution, one that strengthens the networks of community institutions, local governments, and intermediary organizations essential for advancing equity in extractive economies.&#13;
The research further explores the role of intermediary organizations in bridging institutional and capacity gaps in Southwest Arkansas. These organizations can support under-resourced communities by providing convening power, technical assistance, and financial resources. &#13;
Through policy analysis, case studies, and field interviews, this work examines how civic infrastructure and intermediary support can work together to shift economic development toward more just and inclusive outcomes in resource-extractive economies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162154</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>When Girls Just Wanna Have Fun, How Do They Go? A Mixed Methods Study of Nighttime Leisure Travel in Boston</title>
<link>https://hdl.handle.net/1721.1/162153</link>
<description>When Girls Just Wanna Have Fun, How Do They Go? A Mixed Methods Study of Nighttime Leisure Travel in Boston
Dy, Raelene Ina Bianchi Louise Mendez
When we think of urban living and its depictions in popular culture, many shows and movies depict characters in leisure activities, such as meeting friends, going on dates or pursuing hobbies, often at night. Despite the prominence of the night as a key theme in depictions of urban leisure, transportation planners have rarely focused on nighttime leisure travel as an area of intensive study beyond the lens of safety. This thesis investigates the nighttime leisure travel patterns of residents and students in Greater Boston through statistical analysis and data sculpture with a focus on how these vary by gender. To create a baseline understanding of travel patterns, I focused on the Boston Metropolitan Area and used the most recent version of the Massachusetts Department of Transportation’s Household Travel Survey from 2011. I limited my analysis to a fixed set of leisure activities during a fixed nighttime period to understand associated travel behaviors. I also implemented a data sculpture method to investigate how a subset of MIT students made decisions around their travel modes. I found that women travelled differently from men, in that they spent more time walking and were more likely to be passengers in a car. In contrast, men were more likely to be behind the wheel and travel further. Both men and women showed a preference for walking over all other modes when leaving an activity.  Together, these findings indicate that nighttime leisure travel is not a simple extension of daytime patterns. To better design nighttime transportation that accommodates gender differences, planners need to respond to the special qualities of the city after dark.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162153</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The influence of nutrient availability on tumor metabolism</title>
<link>https://hdl.handle.net/1721.1/162152</link>
<description>The influence of nutrient availability on tumor metabolism
Abbott, Keene Louis
Tumor growth and progression are profoundly influenced by nutrient availability in the tumor microenvironment (TME). Nutrient accessibility not only shapes cancer metabolism but also affects therapeutic responses, genetic dependencies, and metastatic behavior. This dissertation explores how nutrient availability modulates these cancer phenotypes. First, we examined how environmental nutrient levels influence the efficacy of drugs targeting metabolic enzymes, showing that their effectiveness varies under different nutrient conditions. We also found that the nutrient composition of the TME in solid tumors is primarily determined by the tissue of origin rather than by the tumor itself. By contrast, leukemia cells actively reshape their nutrient environment. Furthermore, we assessed the impact of physiological nutrient conditions on genetic dependencies, identifying numerous genes whose essentiality is dictated by nutrient levels and uncovering potential new therapeutic targets in leukemia. Finally, we established that single nutrients do not dictate metastatic site preference. Instead, metastatic growth is driven by a complex interplay among multiple nutrients in the microenvironment and the intrinsic properties of cancer cells. These findings provide critical insights into how the nutrient environment influences tumor metabolism.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162152</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Healthy Behavior: Essays in Health and Behavioral Economics</title>
<link>https://hdl.handle.net/1721.1/162151</link>
<description>Healthy Behavior: Essays in Health and Behavioral Economics
Shreekumar, Advik
These essays examine beliefs and decision-making in health settings, emphasizing the role of attention, information, and technology in shaping behavior. The first essay studies human error in chest x-ray interpretation, a common and consequential medical task. It casts radiologists as facing a classical decision-theory problem, derives a novel martingale test for optimal behavior, and implements this test through a prudent application of machine learning to anonymized health records from the Beth Israel Deaconess Medical Center. I find that 58 percent of radiologists make predictable mistakes when assessing cardiac health on chest x-rays. Roughly two thirds of errors are explainable as individual radiologists making inconsistent decisions, and one third reflect the possibility that algorithms detect novel or complex signals. The second essay studies app-based mindfulness meditation, which has grown popular due to claims about its effects on mental well-being, productivity, and decision making. We assess these claims in an experiment with 2,384 US adults, randomizing access and usage incentives for a popular mindfulness app. App access improves an index of anxiety, depression, and stress at two weeks and four weeks, with persistent effects three months later. It also improves earnings on a focused proofreading task by 2 percent. The third essay studies a tradeoff governments face when making recommendations in an evolving crisis. We investigate the effect of taking an early position on how much people believe later recommendations, using an online experiment with 1,900 US respondents in early April 2020. We present participants with a CDC projection about coronavirus death counts and randomize exposure to information that highlights how the President previously downplayed the threat. When the President’s inconsistency is salient, participants are less likely to revise their beliefs about death counts from the CDC projection.
This aligns with a model of signal extraction from government communication, and has implications for changing guidelines in other settings. JEL Codes: D91, I12, C8
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162151</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Economic Reevaluation of Navi Mumbai and the Indian Satellite City</title>
<link>https://hdl.handle.net/1721.1/162150</link>
<description>An Economic Reevaluation of Navi Mumbai and the Indian Satellite City
Thomas, Archer
Navi Mumbai, a municipality in the Mumbai Metropolitan Region, is the largest satellite city project in India. Nevertheless, it has been seen within the planning discipline as underperforming its original ambitions. Drawing upon the goals enumerated in the city’s original development plan, this thesis proposes a series of quantitative metrics corresponding to said goals and then utilizes data drawn from surveys, censuses, official reports, financial statements, and remote sensing datasets to propose an updated evaluation of Navi Mumbai’s performance over the past half-century. This thesis argues that, contrary to earlier perceptions, Navi Mumbai has largely succeeded in fulfilling its ambitions, and that this can be attributed to shifting suburbanization patterns in India, the prescient decision to prioritize office-based service industries over manufacturing, and the ongoing reconfiguration of transportation and logistics networks within the Mumbai region. Reflecting on the history of urban and economic planning in India, this thesis then suggests the implications of Navi Mumbai’s apparent success for satellite city projects in India and across the Global South, focusing on questions of financing and governance.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162150</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Private Sector in Public Transit: Evaluating Early US Experience in P3s</title>
<link>https://hdl.handle.net/1721.1/162149</link>
<description>The Private Sector in Public Transit: Evaluating Early US Experience in P3s
Farabow, Web
Problems in US public transit are well documented: transit providers struggle to develop new infrastructure, face high project costs and long implementation timelines, pursue designs that prioritize ease of delivery over value to the public, and struggle to sustain their operations. In response to these challenges, Public-Private Partnerships (“P3s” or “PPPs”) have been promoted as a way to deliver more infrastructure on faster timelines at lower cost and higher quality. As P3s have been increasingly considered for major transit projects, this thesis investigates their ability to deliver on promotional claims, and their ability to address key challenges in American public transportation.&#13;
&#13;
First, the thesis contextualizes contemporary P3s within a history of private sector involvement in US public transit. In addition to detailing how existing infrastructure came to be, this history intends to sharpen an understanding of contemporary P3s by considering how forms of private involvement have changed over time. It proceeds to develop detailed case studies for three major infrastructure projects that have proceeded under a P3 model: RTD’s Eagle P3 in Denver, Maryland MTA’s Purple Line in Southern Maryland, and Los Angeles Metro’s Sepulveda Transit Corridor Project. Combining historical research and contemporary case study analysis, the thesis seeks to understand the circumstances under which contemporary P3s have emerged, and to draw lessons from early experience.&#13;
&#13;
American transit providers have considered P3s for a variety of reasons, but have been primarily motivated by limited administrative and financial capacity, and by a perceived ability of private firms to deliver projects on faster timelines. Early P3s have facilitated provision, enabling projects that otherwise may not have been built, and have demonstrated their potential to ensure sustainable operations over long-term contract periods. But P3s have achieved mixed results in accelerating project timelines, and their ability to reduce lifecycle project costs remains unclear. While P3s seek to increase private involvement in transit provision, the model places a higher burden on upfront public planning compared to conventional delivery strategies. Public infrastructure owners can design P3s to leverage private sector resources and capacity, but the model comes with tradeoffs that should be carefully weighed against likely benefits. Ultimately, P3s can address a number of acute challenges in American public transit, but are unlikely to provide a workaround to fundamental political and financial challenges that limit transit development more broadly.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162149</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-Scale Optimization using Reinforcement Learning, Dynamic Programming, and Column Generation</title>
<link>https://hdl.handle.net/1721.1/162148</link>
<description>Large-Scale Optimization using Reinforcement Learning, Dynamic Programming, and Column Generation
Paskov, Alexander Spassimirov
One of the most enduring challenges in large-scale optimization is determining how to push the boundaries of scalability without compromising on performance or rigor. For decades, the exponential advances in computational power offered a straightforward solution: bigger problems could simply be tackled by bigger machines. However, in recent years, it has become increasingly apparent that pure computational force alone can no longer keep pace with the ever-growing complexity and scale of real-world applications. Additionally, despite the remarkable success of general-purpose methods for linear and integer optimization, these methods often struggle when confronted with domains that involve intricate dynamics, massive dimensionality, or a need for fine-grained sequential decisions. The simple question thus arises: can we design new optimization methods that scale more appropriately? In this thesis, we propose using dynamic programming, reinforcement learning, and column generation as a practical way to address this need across a variety of settings.&#13;
&#13;
We begin by developing and refining our methodology within the context of reinforcement learning and dynamic programming. We then move on to the application of column generation, and finally show how these techniques can be combined to supercharge fundamental machine learning methods with large-scale optimality.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162148</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reparative Preservation through Immersive 3D Documentation: Cultural Memory, Spatial Justice, and Gullah Geechee Futures on Daufuskie Island</title>
<link>https://hdl.handle.net/1721.1/162147</link>
<description>Reparative Preservation through Immersive 3D Documentation: Cultural Memory, Spatial Justice, and Gullah Geechee Futures on Daufuskie Island
Jones, Wil
This thesis advances a reparative framework for cultural preservation by combining immersive documentation with co-authored digital storytelling to support Black spatial memory and community sovereignty. Grounded in fieldwork on Daufuskie Island, South Carolina—a historic Gullah Geechee community confronting dispossession and cultural enclosure—the project co-creates Daufuskie3D (https://daufuskie3d.org/), an interactive website that presents annotated 3D scans, oral histories, ambient videos, and symbolic interface design rooted in Gullah epistemologies.&#13;
&#13;
It is guided by two research questions: How can immersive documentation support reparative preservation for communities at risk of spatial erasure? And what frameworks—technical, ethical, and political—ensure digital practices reflect Black cultural values, descendant authorship, and community control? Drawing from Black geographies, wake work, vernacular cartography, and speculative design, the thesis introduces a conceptual distinction between visualization and analysis tools to examine how different modes of spatial capture shape visibility and authority. The project finds that immersive tools, when grounded in ethical design and descendant authorship, can function not simply as representational media but as reparative infrastructure—supporting visibility, stewardship, and spatial return in communities confronting erasure.&#13;
&#13;
The Daufuskie3D website serves as both platform and method. Its spatial interface draws on Gullah visual language, including Underground Railroad quilt codes and spiritual symbolism, while its non-linear navigation resists conventional heritage taxonomies. Rather than flattening culture into content, the site embraces ambiguity, withheld spatial detail, and narrative restraint as ethical design principles. Developed in partnership with Ms. Sallie Ann Robinson, a sixth-generation Gullah cultural steward, the project repositions preservation as participatory, situated, and future-facing. It offers Daufuskie3D as both a working prototype and a methodological contribution toward reparative immersive practice—centering digital preservation as a strategy of memory, sovereignty, and cultural regeneration within the Black diaspora.&#13;
&#13;
Keywords: Immersive Documentation, 3D Scanning / LiDAR / Photogrammetry, Cultural Preservation, Gullah Geechee, Daufuskie Island, Reparative Preservation, Black Geographies, Digital Heritage, Speculative Design, Counter Cartography, Counterpublic, Spatial Justice, Oral History, Afrofuturism, Digital Public, Digital/Web Archive, Cultural Stewardship, Ethical Design, Participatory Design, Underground Railroad, Return
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162147</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a political economy of the power sector: green capitalism, eco-socialism, and co-operative power in decarbonized climate policy</title>
<link>https://hdl.handle.net/1721.1/162146</link>
<description>Toward a political economy of the power sector: green capitalism, eco-socialism, and co-operative power in decarbonized climate policy
Jin, Brooke
The political economy of the power sector has been characterized by a putative transition from fossil capitalism to green capitalism in an attempt to mitigate the worst effects of anthropogenic climate change on nature and society. In recent years the rise of green industrial policy, such as the passage of the Inflation Reduction Act of 2022, has sought to stimulate domestic economic development of green-technology projects and implement protectionist trade policies with the normative intent of protecting the geopolitical hegemony of U.S. industry. Yet the objectives of such industrial policies, which function less to reduce carbon emissions than to increase resource- and carbon-intensive consumption patterns, are antithetical to putative state objectives of decarbonizing the power grid and industrial operations; indeed, green capitalism does not exist without the continued influence of fossil capital.&#13;
In this thesis I look to Marxist theories of the state, capital, labor, and nature to illustrate the crises of capitalism that have been occurring due to the exponential increase in power demand by data centers and large technology companies. In reshaping the governance of power markets, electricity generation, and transmission and distribution infrastructure through this increase in demand, called load growth, I show the illusion of sustainability under a green-capitalist political economy that purports to advance decarbonization goals, yet which in actuality facilitates conditions for the centralization and monopolization of private capital, as well as the continued destruction of nature and exploitation of workers. However, this crisis of load growth and the issue of governance that it raises open a window for experimentation into new state systems, socialized modes of production, and labor and environmental solidarity in the creation of a new climate policy: one that prioritizes equity, welfare, ecological preservation, and a truly decarbonized society. I propose a socialization of the power sector to increase community autonomy over their energy needs and to begin to dismantle the technocratic influence of fossil-fuel and large technology companies over electricity generation and access.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162146</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonization at the Neighborhood Scale: &#13;
Challenges, Learnings and Opportunities in an Emerging Model</title>
<link>https://hdl.handle.net/1721.1/162145</link>
<description>Decarbonization at the Neighborhood Scale: &#13;
Challenges, Learnings and Opportunities in an Emerging Model
Cina-Sklar, Zoë
Decarbonizing residential buildings in the United States is critical for reaching climate goals and has significant public health and energy justice benefits if accessible to all. To date, building electrification has been individual-level and market-driven, with some financial incentives at the state and federal level. This model is generally inaccessible to low-income homeowners and renters who are unable to afford the upfront costs of building improvements and new electric appliances. Neighborhood-scale building decarbonization has been proposed as an alternative in which new developments would be built all-electric or existing buildings would be electrified at the block or neighborhood scale. In the latter use case, neighborhood-scale building decarbonization is often tied explicitly to decommissioning gas lines. Specifically, proponents posit that these projects could be funded through avoided gas line repair and replacement costs. Investor-owned utilities are seen by some experts in the space as key to the success of neighborhood-scale building decarbonization because of their financing capabilities and existing role in providing heating and/or electric service to customers. In recent years, a number of state policymakers have passed legislation approving utility-funded neighborhood-scale building decarbonization and state utility commissions have promulgated regulations approving cost recovery for these projects. Utilizing desk research and informant interviews, this paper analyzes what has enabled and hindered existing utility-funded neighborhood-scale building decarbonization pilot projects in California, Massachusetts, and New York. I identify strong and specific climate goals, the passage of enabling legislation, an engaged state utility commission, and strong advocacy ecosystems as key factors for initiating neighborhood-scale pilot projects.
Through informant interviews, I identify costs, financing, community buy-in and planning as central determinants for the success of pilot projects and the future of the model. I close by offering recommendations and outstanding research areas for planners interested in pursuing future neighborhood-scale building decarbonization projects.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162145</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Can planning, a tool for colonization, be decolonized?&#13;
MIT’s funding at the expense of Indigenous Peoples through the Morrill Act</title>
<link>https://hdl.handle.net/1721.1/162144</link>
<description>Can planning, a tool for colonization, be decolonized?&#13;
MIT’s funding at the expense of Indigenous Peoples through the Morrill Act
Barrera Gonzalez, Devora
This thesis questions whether planning, and the activities the profession’s umbrella covers, is beneficial or harmful. The project analyzes the role of planning in the colonization of Turtle Island, showing how it materialized and legitimized the seizure of Indigenous Land through practices like urbanization, enclosure, and the creation of Indian reservations, and through tools like cartography, lawfare, and landscape architecture and design. I argue that there is no such thing as sustainable or beneficial urbanization because urbanization equals death; that planning is inherently harmful because it was born as a tool of colonization; and that there is no way to decolonize the profession, given that the profession upholds the current land system. The only solution to reverse and undo the harm done by planning and urbanization is to give Land Back to Indigenous Peoples. To build this argument, I walk through the narrative constructed to dispossess land, the concept of imaginary geography, the different ways planning enabled and legitimized land dispossession, and finally the modification of land itself (urbanization). A chapter is dedicated to looking closely at one piece of lawfare in particular, the Morrill Act, revealing the history of MIT’s founding at the expense of Indigenous Peoples and the role that universities play in maintaining and strengthening the systems of oppression in place. Answering the calls for decolonization of the profession, this thesis underscores that, because planning was born as a tool for colonization, the profession cannot be decolonized, and it demands Land Back as the only solution.
The thesis presents information on two parcels belonging to the Confederated Tribes of Coos, Lower Umpqua, and Siuslaw Indians, located in the state of Oregon, that were seized and, through the Morrill Act, resold with the proceeds benefiting MIT, and it calls for the restitution of these parcels and the giving of Land Back.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162144</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equity and Climate Resilience in Bogotá's Public Space Policy: A Critical Policy Review</title>
<link>https://hdl.handle.net/1721.1/162143</link>
<description>Equity and Climate Resilience in Bogotá's Public Space Policy: A Critical Policy Review
Duque Añez, Silvia
In Bogotá, where long-standing spatial and social inequalities intersect with growing climate risks, public space policy holds the potential to either reinforce exclusion or promote resilience and justice. Decisions about parks, plazas, and green corridors are not neutral; they reflect political priorities, embedded values, and power dynamics. This thesis asks: To what extent, and in what ways, does Bogotá’s public space policy framework incorporate criteria of equity and climate resilience? Through this question, the research examines how policies define and implement these concepts, what types of interventions they promote, and what limitations may emerge.&#13;
While prior research has emphasized the importance of inclusive and adaptive public spaces, there is limited analysis of how these principles are embedded in policy instruments in Latin American cities. Addressing this gap, this thesis develops an analytical framework informed by literature on urban environmental justice and climate adaptation. This framework serves as both an evaluative tool and a resource for policymakers seeking to move beyond vague commitments and toward actionable pathways for equity and climate resilience. &#13;
The framework is used to analyze two key policy instruments: the District Public Space Policy (Política Pública Distrital de Espacio Público 2019-2038) and the Master Plan (Plan de Ordenamiento Territorial: Bogotá Reverdece 2022-2035). The evaluation reveals that both perform well, reflecting a genuine political effort to prioritize these issues. However, the findings also show that narrow or inconsistent interpretations of equity and climate resilience can lead to unintended consequences, and that significant implementation challenges remain. By grounding its analysis in a Global South context, this thesis contributes to international conversations on urban sustainability, offering both a critical lens and a practical tool. Ultimately, this research advocates for a shift in public space governance, one that treats equity and resilience not as aspirational ideals, but as measurable, structural commitments to a more just and climate-ready urban future.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162143</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interest Group Politics in U.S. "Social Housing" Experiments</title>
<link>https://hdl.handle.net/1721.1/162142</link>
<description>Interest Group Politics in U.S. "Social Housing" Experiments
Davidson, Zak
The rising cost of housing has renewed interest in public sector-led models of mixed-income housing production. Advocates, local governments, and state lawmakers are exploring strategies to involve the public sector more directly in the residential development process by capitalizing revolving loan funds, leveraging public land, and creating new public authorities. While a universal definition for “social housing” remains elusive, most policymakers and supporters agree that social housing is permanently affordable for economically and racially diverse households and includes elements of resident self-governance. This research analyzes how key interest groups—including affordable housing developers, tenant advocates, labor unions, market-rate developers, and pro-housing coalitions—shape and respond to emerging social housing initiatives. Drawing on interviews and case studies of Seattle, Montgomery County (MD), California, New York, Atlanta, and Chattanooga between 2019 and 2025, this thesis examines how political context, institutional constraints, and coalition dynamics influence how social housing proposals are framed, negotiated, and either supported or resisted by key stakeholders. Four key themes emerge from these case studies. First, existing affordable housing developers often interpret new mixed-income, permanently affordable proposals as competition, particularly amidst resource scarcity and institutional constraints. This constitutes a substantial roadblock for the social housing movement. Second, proponents’ theory of change, initiative branding, and their ability to participate in multi-issue bargaining notably impact how affordable housing interest groups respond. Third, private sector actors’ support appears dependent on the public sector’s willingness to partner and how proponents describe the problem they are solving. 
Fourth, while collaborations around social housing may trigger fault lines between YIMBYs and tenant justice groups regarding revenue neutrality and the value of new market-rate supply, social housing represents an opportunity for bridge-building and collaboration across the housing movements. As interest in these models grows, this research offers practical insights for advocates and policymakers seeking to design locally tailored, politically viable approaches to public-led, mixed-income housing production.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162142</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shifting Spaces: Housing and Urban Change in Kabul</title>
<link>https://hdl.handle.net/1721.1/162141</link>
<description>Shifting Spaces: Housing and Urban Change in Kabul
Ghanizada, Bibi Khadija
This thesis explores the evolution of Kabul’s housing landscape with a focus on the emergence of Shahraks (planned townships) after 2001. Drawing on historical research, four case studies (Aria City, Khwaja Rawash Township, Khushal Khan Mena Blocks, and Omid-e-Sabz Township), and interviews with residents and experts, it analyzes how Shahraks have reshaped urban development in a rapidly growing city. Inspired by Soviet-era Mikrorayons, Shahraks introduced formal infrastructure, legal recognition, modern amenities, and opportunities for new economic activity. They helped expand Kabul’s formal housing stock and created pockets of urban community identity. However, the research finds that Shahraks also deepen spatial and socioeconomic inequalities. Largely built through private investment and targeting wealthier residents and civil servants, they remain inaccessible to the majority of Kabul’s population. Many Shahraks were developed on contested or illegally grabbed land, raising concerns about tenure security and governance. Despite improved infrastructure compared to informal settlements, Shahraks often suffer from poor climate responsiveness, environmental degradation, limited green spaces, and energy-intensive designs. Their weak integration with Kabul’s broader urban fabric further exacerbates issues of spatial fragmentation.&#13;
&#13;
Looking ahead, the thesis argues that Kabul must learn from both the partial successes and profound shortcomings of Shahraks as it plans future projects like Kabul New City. Their model is not inherently unsustainable or inaccessible, but without deliberate reforms, Kabul risks reproducing a cycle where contemporary urban development becomes synonymous with exclusion, fragmentation, and missed opportunity. Key recommendations include prioritizing affordable and expandable housing models, enforcing transparent land governance, promoting climate-adaptive design, strengthening connections between housing and employment centers, and carefully structuring public-private partnerships to align private investment with public goals.&#13;
The challenge is not simply to build new cities, but to build a more inclusive, adaptable, and sustainable urban future for all Kabulis.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162141</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of phage detection by bacterial innate immune proteins</title>
<link>https://hdl.handle.net/1721.1/162140</link>
<description>Mechanisms of phage detection by bacterial innate immune proteins
Zhang, Tong
Bacteria are under constant threat from their viral predators, known as bacteriophages (or phages). As a result, bacteria have evolved diverse immune mechanisms to protect themselves from phage infection, such as restriction modification, CRISPR-Cas, and abortive infection (Abi) systems. Because Abi systems function through killing infected cells to protect the bacterial population, they must stay inactive prior to infection, but rapidly detect phages and promptly trigger an immune response. Although many novel Abi systems have been discovered in recent years, how they detect phage infection remains poorly understood. Here, I demonstrated that CapRel_SJ46, an anti-phage protein from E. coli, senses phage infection by directly binding to the newly synthesized major capsid proteins (MCPs) of certain phages. Binding to the MCPs releases autoinhibition of the CapRel_SJ46 toxin domain, enabling it to pyrophosphorylate tRNAs, which blocks translation to restrict viral infection. Detection of the MCPs is analogous to how eukaryotic innate immune systems detect foreign invaders through conserved pathogen-associated molecular patterns (PAMPs). In addition to the MCPs, I found that CapRel_SJ46 can directly bind to another unrelated and structurally different phage protein, called Gp54. Bas11 phages harbor two trigger proteins, and both are sensed by CapRel_SJ46 during infection, indicating that a bacterial immunity protein can sense more than one phage-encoded trigger. Additionally, I demonstrated that another CapRel homolog, CapRel_Ebc, senses the inhibition of a host cell division protein by the phage-encoded trigger, which is analogous to effector-triggered immunity in eukaryotes, where innate immune proteins sense virulence-associated activities of pathogens rather than directly sensing PAMPs. 
Lastly, I characterized another Abi system, named RAZR (ring-activated zinc-finger RNase), and showed that RAZR forms a ring-shaped supramolecular complex of over 1 MDa upon sensing a phage-encoded PAMP, leading to activation of its RNase activity to restrict phage infection. This finding highlights the importance of higher-order molecular assembly in bacterial innate immunity. Collectively, my thesis work has provided new insights into the molecular mechanisms by which bacterial innate immune systems detect phage infection.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162140</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Theory and New Practical Methods for Solving Large-Scale Linear and Conic Optimization</title>
<link>https://hdl.handle.net/1721.1/162139</link>
<description>New Theory and New Practical Methods for Solving Large-Scale Linear and Conic Optimization
Xiong, Zikai
In the last several years there has been a dramatic shift in the way many large-scale linear programs (LPs) are solved in practice, with classic methods (simplex method and interior-point methods) being replaced by the primal-dual hybrid gradient method (PDHG) to solve large-scale LP problems. While PDHG---with heuristic enhancements and GPU implementation---has been very successful in solving large-scale LP problems, its performance can have substantial variance and an intuitive understanding of the drivers of its performance has been lacking.  In this context the research in this thesis has three related goals: (i) the development of new theory to explain the performance of PDHG for large-scale LPs, (ii) the development of new practical methods for solving large-scale LP problems based on PDHG, and (iii) the generalization of such new theory and new practical methods to the more general class of conic optimization problems.&#13;
 &#13;
The thesis is organized as follows. Chapter 1 is an introduction and a unified summary of the thesis research as a whole.  Chapter 2 presents computational guarantees for PDHG for solving LP problems based on two instance-dependent natural geometric condition measures, namely the "limiting error ratio" and the "LP sharpness." The connection between these condition measures and other LP condition numbers is also developed. Chapter 3 presents computational guarantees for more general conic optimization problems using the geometry of the primal-dual (sub)level sets.  Based on our analysis we propose a central-path Hessian-based rescaling to enhance algorithmic performance by improving the (sub)level set geometry. We present computational results that show the potential of our methodology to improve the performance of PDHG in practice. Chapter 4 presents a closed-form expression of the iteration complexity of PDHG for LP instances with unique optima. The iteration bound has a reciprocal relationship with (i) stability under data perturbation, (ii) proximity to multiple optima, and (iii) LP sharpness. Chapter 5  considers the iteration complexity of LP instances under a sub-Gaussian model of instance generation.  In this model we show that PDHG is a polynomial-time algorithm with high probability.  This result partially shrinks the gap between theory and practice for PDHG by showing that PDHG can solve "most" LP instances in polynomial time. Finally, Chapter 6 presents a practical PDHG-based large-scale conic optimization solver with GPU enhancements.  In this chapter we present computational experiments that show that the solver is more efficient than other first-order methods and commercial solvers for large-scale conic optimization problems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162139</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Informing Public Health Policy Design and Operations with Analytics: Methods and Applications</title>
<link>https://hdl.handle.net/1721.1/162138</link>
<description>Informing Public Health Policy Design and Operations with Analytics: Methods and Applications
Zerhouni, El Ghali Ahmed
Data-driven approaches hold immense potential for improving public health decision-making in complex, uncertain, and high-risk environments. Yet, there are several key challenges that stand in the way of successfully leveraging large-scale, heterogeneous, and noisy data into timely and actionable policy insights. These challenges are particularly pronounced when conventional modeling tools fall short, for instance, in settings where health risks arise from infection propagation in intricate supply chains, rapidly mutating pathogens, or delayed and fragmented surveillance systems. This thesis introduces a suite of novel methodologies and use cases at the intersection of operations research, epidemiology, and machine learning to address some of these challenges and support more informed, timely, and proactive public health decisions.&#13;
&#13;
A central focus of this thesis is the management of health risks related to zoonotic viruses, which are pathogens that emerge in animals and can potentially jump to humans, then further evolve to become transmissible between humans. These viruses pose a growing global health threat. Notably, outbreaks of zoonotic viruses frequently emerge in live animal markets in developing countries, even when infection rates in the upstream farms supplying these markets remain consistently low. Motivated by this empirical observation, the first chapter of this thesis develops an innovative epidemiological model called the Transmission, Interaction, and Persistence (TIP) model. This model integrates stochastic supply chain dynamics and environmental transmission mechanisms, and sheds light on how market-level factors amplify the risk of infection outbreaks. It yields actionable insights regarding the potential effectiveness of risk mitigation strategies such as frequent market sanitation and supply consolidation.&#13;
&#13;
Since March 2020, the world has experienced multiple waves of infections caused by the SARS-CoV-2 virus. Similar to past pandemics, SARS-CoV-2 has spread in waves, each driven by different genetic variants of the virus. Public health agencies have often struggled to predict in advance which variants would drive a new wave of infections. The second chapter of this thesis introduces an AI-enabled early warning system for emerging viral variants. The newly developed predictive model incorporates genetic and epidemiological features and is trained and tested on over 9 million sequenced SARS-CoV-2 variants across 30 countries. It accurately predicts whether each new variant will drive a significant wave of infections within the following 3 months.&#13;
&#13;
There is ample biological and empirical evidence regarding the roles of mutating variants and population immunity in driving infection waves of respiratory viruses. Motivated by this, the third chapter of the thesis develops the first epidemiological model, called the Immunity-Variants-Epidemic (IV-Epidemic) model, that explicitly captures circulating variants and the evolving population immunity profile to more accurately reflect the long-term trajectory of variant-driven pandemics. It incorporates variant evolution and population immunity dynamics, and is able to replicate the observed multi-wave infection patterns without requiring ad hoc recalibration.&#13;
&#13;
The fourth chapter of the thesis focuses on post-marketing pharmacovigilance, which is key to drug safety regulatory work. It presents PR1SM (Patients Really are 1st in Signal Management), an AI-based framework for identifying potential drug safety signals using post-marketing surveillance data. By structuring adverse event reports into parallel time series at multiple levels of clinical aggregation and adjusting for exposure trends, PR1SM complements standard disproportionality methods to detect safety signals earlier and with greater sensitivity in both real-world and synthetic settings.&#13;
&#13;
Collectively, the chapters of this thesis demonstrate how operations research can be combined with domain-specific methods in biology, epidemiology, and pharmacovigilance to inform data-driven public health strategies. The proposed analytical frameworks offer interpretable, scalable, and policy-relevant tools to create more resilient public health systems.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162138</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>When Public Space Goes Digital: Rethinking Urban Planning with Insights from Letra Ese</title>
<link>https://hdl.handle.net/1721.1/162137</link>
<description>When Public Space Goes Digital: Rethinking Urban Planning with Insights from Letra Ese
Chiappero, Sofia Belen
Digital public spaces have become vital for organizing, belonging, and community-building, particularly for marginalized groups such as the LGBTQ+ community, who are increasingly excluded from both physical and online public spaces. Yet, the design of these digital spaces is largely shaped by profit-driven interests rather than the needs of the communities that rely on them. This thesis addresses this gap by asking: What if we treated digital spaces with the same care and intention we demand from our physical public spaces?&#13;
&#13;
To explore this question, the thesis brings together frameworks from urban planning, LGBTQ+ advocacy, and digital design. It proposes a reframing of “urban planning” to include “digital urban planning,” grounded in principles of rights, care, safety, and collective memory. Through a feminist urbanist lens and systems thinking, the work challenges the separation between physical and digital cities.&#13;
&#13;
Methodologically, the project moves beyond traditional research approaches, incorporating Conversational Design and the Relational User Framework to co-create knowledge with activists. The resulting contributions include both a prototype and a roadmap for a digital public space that supports and amplifies LGBTQ+ advocacy; not as a technical fix, but as a speculative and participatory framework for reimagining digital public infrastructure.&#13;
&#13;
This research is grounded in a case study of Letra Ese, an activist-led LGBTQ+ organization in Mexico. The case illustrates how such groups navigate systemic neglect while leveraging technology to document violence and sustain community. Ultimately, the thesis offers a starting point for rethinking the design of digital public spaces and argues for the inclusion of digital environments within the domain of urban planning, recognizing that for many, especially marginalized communities, much of life is already lived online.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162137</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flooding as Remembering: A Trickster’s Guide to Fugitive Ecology, Revolutionary Recall, and Speculative Worldbuilding Beyond the Plantationocene</title>
<link>https://hdl.handle.net/1721.1/162136</link>
<description>Flooding as Remembering: A Trickster’s Guide to Fugitive Ecology, Revolutionary Recall, and Speculative Worldbuilding Beyond the Plantationocene
Delaney, Simone Hope
Since the early days of conquest, Black, Indigenous, and Afro-Indigenous peoples of the Lower Mississippi River Delta have survived recurrent processes of settler colonial un-worlding by re-worlding sovereign lifeways rooted in reciprocal relationships to other colonized peoples and the environment. Un-worlding occurred to Black and Indigenous peoples through dispossession of land, capture into enslavement, and genocide. This process was intertwined with the un-worlding of the landscape’s agency, which was captured and enclosed into property by arresting waterways’ movements through constrictive engineering using coercive labor. In the Bas de Fleuve swamps (today known as the Louisiana Central Wetlands), self-emancipated fugitives who had escaped enslavement formed autonomous inner worlds in the unenclosed territories between the Mississippi River and Lake Borgne. Known as Maroons, they were led by Juan San Malò and forged interdependent networks that extended to Indigenous settlements, enslaved Africans on plantations, and free Blacks in New Orleans. By living outside eurosettler logics of property and re-establishing reciprocity with the more-than-human web of life, they demonstrated that the liberation of captive people is bound to the liberation of captive landscapes. Their re-worlding was also reminiscent of the pan-African trickster figure: anarchistic heroes who overturn the dominant oppressive world order for more liberatory realities. Today, the destruction of wetlands across Southeast Louisiana means that descendants are facing an un-worlding of the sovereign livelihoods their ancestors re-established generations before. This is due to anthropogenically induced land loss, flooding, storm surge, and saltwater intrusion influenced by extractivist industries.
Through revolutionary recall, reclaiming the logics of re-worlding established by Juan San Malò’s band of Maroons offers pathways to resist the intensifying threats of climate change that represent afterlives of slavery. Common Ground Relief is one collective that has drawn from Maroon legacies to lead bottom-up disaster response, mutual aid initiatives, and citizen-led wetland restoration. Drawing from creative land reclamation projects led by Utē Petit, Monique Verdin, the Nanih Bvlbancha Builders, and the Descendants Project, a constellation of small, site-specific projects is also presented to demonstrate how revolutionary recall can become a form of speculation for broader land-based liberation in the Lower Mississippi Delta.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162136</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Mass Timber Adoption in Greater Boston, Massachusetts: A Practical Study for Local Real Estate Developers</title>
<link>https://hdl.handle.net/1721.1/162135</link>
<description>Accelerating Mass Timber Adoption in Greater Boston, Massachusetts: A Practical Study for Local Real Estate Developers
Cerny, Faith W.
Today’s real estate development strategy must incorporate decarbonization to mitigate the built environment’s detrimental impact on climate change. Beyond required climate action, developments are increasingly seen as responsible for improving occupant health and wellbeing. Furthermore, industry stakeholders are tasked with efficiently delivering sustainable, high quality, and affordable housing in dense, urban areas to meet a growing demand. As the stakes intensify and demands of real estate development increase, projects face multiple barriers to implementation. This thesis explores mass timber construction as a viable solution to modern development challenges. While research content derives from multiple geographies within North America, a particular focus on the relevance and utility for Greater Boston, MA, USA is maintained. The thesis comprises five chapters. Following an introduction, the second chapter provides an overview of mass timber as an evolving building technology with an emphasis on how and why it is gaining momentum as a viable and preferred alternative to traditional building materials. The section conversely discusses commonly cited drawbacks delaying industry acceptance. The third chapter explores mass timber adoption at multiple scales, including studies of innovative projects proving achievement of development objectives despite challenges. Guided by insights from interviews, this chapter discusses stakeholders’ current understanding of the material and motivations for its use, perceived feasibility constraints as well as believed opportunities for its incorporation and proliferation, with a focus on Greater Boston. The fourth chapter considers methods to accelerate the rate of mass timber adoption, including facilitation of local development strategy. 
The section builds on research and interview findings to establish key considerations when evaluating a mass timber project and to propose an analytical framework for real estate developers to holistically assess the value of incorporating the material in their projects. The concluding chapter speculates on the local arc of adoption and the subsequent impacts of widespread mass timber project implementation for the city and region.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162135</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Micromobility in New York City: An Examination of Vehicle Type Use and User Behavior in Protected Bicycle Facilities</title>
<link>https://hdl.handle.net/1721.1/162134</link>
<description>Understanding Micromobility in New York City: An Examination of Vehicle Type Use and User Behavior in Protected Bicycle Facilities
Boeri, Jake
A shift towards the use of micromobility vehicles (MMVs), specifically motorized two-wheeled vehicles in urban mobility networks, has gained significant attention over the past decade. Many have commented on a perceived increase in MMV use in New York City (NYC) in particular, a trend that appears to have accelerated in the wake of the COVID-19 pandemic and in response to the expansion of high-quality bicycle facilities across the city. However, the extent to which different types of MMVs are used and related rider behavior is poorly understood, forcing policymakers, planners, elected officials, and community members to develop policies and infrastructure with inadequate information. Through direct observation of 9,629 vehicles across five locations, this thesis provides a degree of ground truth and an initial understanding of the prevalence of different MMV types used in protected bicycle facilities in NYC and related user behavior, including commercial application of these vehicles, helmet use, and passenger presence. The findings of this study point to a surprisingly high use rate of motorized MMVs in protected bicycle facilities in NYC, with motorized vehicles comprising nearly three-quarters (73.96%) of all vehicles observed. E-bikes were the largest class of vehicles observed (63.85%), followed by conventional, non-motorized bicycles (25.76%), e-scooters (6.69%), and mopeds (1.96%). Commercial-use vehicles made up nearly one-quarter (23.20%) of observations. A very small proportion of observations were cargo vehicles (2.89%), indicating their limited use for both personal and commercial purposes. Users were significantly more likely to wear a helmet when using a non-motorized vehicle than a motorized one, with helmet use varying substantially across vehicle classes. Modal split of MMV types, commercial use, and cargo vehicle use varied by both location and time of day, pointing to uneven distribution across the mobility network. 
There were substantial differences between the manual count from this study and automated bicycle counts generated by the New York City Department of Transportation over the same period, indicating a systemic undercounting of MMV use by the automated count system. In response to these findings, a series of recommendations are provided for how NYC and other cities with both developed and developing MMV networks can promote and guide safe, equitable, and sustainable mode shift as micromobility use expands. These&#13;
proposals include policy and spatial planning improvements that should be part of a response to widespread MMV adoption, and the ongoing transformation of how protected bicycle facilities are used.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162134</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Housing in European Metropolises: supply dynamics and planning frameworks in large Urban Areas of the EU</title>
<link>https://hdl.handle.net/1721.1/162133</link>
<description>Housing in European Metropolises: supply dynamics and planning frameworks in large Urban Areas of the EU
Berra Sandin, Mikel
Europe’s housing affordability crisis presents significant territorial challenges, particularly as housing demand increasingly spills over from inner cities to surrounding municipalities at the metropolitan scale. This study addresses key policy questions regarding the coordination of housing supply and planning instruments in large urban areas of the European Union. &#13;
Focusing on 23 large Functional Urban Areas (FUAs), the research follows a three-part approach: a quantitative analysis of municipal-level housing production and demographic growth between 2011 and 2021 based on Census data; an analysis of the effects of housing supply on housing prices; and an AI-powered quantitative examination of urban plans at municipal, metropolitan, and regional scales to observe whether they establish housing supply goals. This methodology generates evidence on the spatial dynamics of housing development by creating an EU-wide database at municipal granularity, while providing a novel focus and analytical approach to institutional urban plans as drivers of housing supply.&#13;
Findings reveal mixed alignment between housing supply and demographic growth, with Southern and coastal urban areas falling short on housing supply. In most cases, there is a pronounced metropolitan effect, in which peripheral municipalities experience larger housing and population growth. Analysis of the plans shows that more frequent planning is associated with greater housing provision. In addition, the research highlights that housing goals are usually set in local plans, revealing a mismatch between planning efforts and housing dynamics, which tend to be metropolitan or regional. The research thereby deepens the understanding of European housing provision and the planning of urban territories, highlighting the need for stronger housing policy mechanisms at the metropolitan level.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162133</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Miles Matter: Demographics, Distance, and Decision-Making</title>
<link>https://hdl.handle.net/1721.1/162132</link>
<description>Miles Matter: Demographics, Distance, and Decision-Making
El-Sisi, Kareem H.
In this thesis, I investigate which variables have the strongest influence on an individual's travel mode choice depending on the purpose and level of urgency (leisure, essential, emergency) of the trip. I analyze spatiotemporal costs conditioned by demographic segmentation, using data on population mobility patterns in auto-centric Los Angeles and multimodal New York City. Through a synergistic three-pronged methodology consisting of spatial (time and distance analysis complemented by a spatial interaction model), statistical (multinomial logistic regression model), and machine learning-based (graph neural networks and extreme gradient boosting) analysis, I explore the multifaceted nature of decision-making processes in different urban environments. The hidden patterns revealed by artificial intelligence show that distance is the key determinant of mode choice, depending on the urban form of the city and its adaptation to multimodality.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162132</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Producing a Black Oeuvre: Narratives of Black Grassroots Cultural Organizing in Boston</title>
<link>https://hdl.handle.net/1721.1/162131</link>
<description>Producing a Black Oeuvre: Narratives of Black Grassroots Cultural Organizing in Boston
Hunsen, Alula
Amidst a bevy of nonprofits and governmental actors that support and facilitate cultural and aesthetic production in the City of Boston, a vanguard of Black artists and cultural organizers are developing structures and organizations to help local members of Boston’s Black communities steer their own cultural production. This thesis develops an understanding of actions being taken by these organizers and organizations through interviews, and builds a set of participatory action research frameworks by partnering with these organizations (specifically: Thrill, Black Cotton Club, and 5Thou) to conduct further research into how Black Bostonians can continue to self-determine in the realms of arts and culture. Drawing from a lineage most directly traceable to the Black Arts Movement of the late 1960s, and to hip-hop cultural production in ensuing decades, these organizers are furthering Black-led, community-controlled arts, and fostering community-building. Borrowing theorist Henri Lefebvre’s conception and declaration of a right to creative expression and participation, characterized as oeuvre and as a critical aspect of a “right to the city,” I hypothesized that these actions toward cultural self-determination could be seen as the establishment of a Black oeuvre. This assertion was expanded upon by research partners, to include a broader array of strategies and conceptual frameworks for producing Black place, community, and culture in Boston.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162131</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Envisioning Regional Futures in Southeast Los Angeles: Understanding Barriers to Implementing Transit-Oriented Communities along the Forthcoming Southeast Gateway Line</title>
<link>https://hdl.handle.net/1721.1/162130</link>
<description>Envisioning Regional Futures in Southeast Los Angeles: Understanding Barriers to Implementing Transit-Oriented Communities along the Forthcoming Southeast Gateway Line
Martinez, Alejandra A.
The first 14.5-mile phase of the Southeast Gateway Line (SEGL), a planned light rail project through Southeast Los Angeles and the Gateway Cities region, is expected to be completed by 2035. The rail line aims to improve transit access while being complemented by a regional planning framework and station area planning that seeks to promote transit-oriented communities around station areas and drive equitable community development along the corridor. However, it remains uncertain whether the frameworks and governing bodies responsible for implementing the rail project, including the Los Angeles County Metropolitan Transportation Authority (LA Metro), the Gateway Cities Council of Governments (GCCOG), and cities along the corridor, will effectively align the transit investment with these land use and development goals.&#13;
&#13;
Given these uncertainties, this thesis focuses on the Southeast Los Angeles (SELA) subregion, where a history of structural challenges underscores both the urgency and the complexity of realizing visions for transit-oriented communities tied to the forthcoming rail investment. Drawing on semi-structured interviews with LA Metro and GCCOG staff, along with officials and staff from cities hosting future stations, this research explores the emerging political, economic, and structural barriers to implementing transit-oriented land use around two future SEGL stations: Florence/Salt Lake Station in Huntington Park and Firestone Station in South Gate, both of which have multi-jurisdictional spheres of influence. This thesis also proposes a collaborative framework that encourages SELA stakeholders to engage in incremental, low-stakes planning and establish accountability mechanisms before the rail arrives, laying the foundation for sustained stewardship over the vision of transit-oriented communities and broader equitable community development goals throughout the rail's lifespan.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162130</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation and activity of the 5-methylcytosine DNA glycosylase ROS1 contributes to DNA methylation patterning across development</title>
<link>https://hdl.handle.net/1721.1/162129</link>
<description>Regulation and activity of the 5-methylcytosine DNA glycosylase ROS1 contributes to DNA methylation patterning across development
Hemenway, Elizabeth A.
DNA methylation patterning is a consequence of opposing activities of DNA methyltransferases and DNA demethylases. A 5-methylcytosine DNA glycosylase, ROS1, removes DNA methylation from the Arabidopsis genome. In flowering plants, two distinct female gametes, the egg cell and the central cell, are fertilized, producing what will become the embryo and the endosperm of the seed. In Arabidopsis, a 5-methylcytosine DNA glycosylase, DME, demethylates regions in the central cell genome, leading to methylation differences between maternally- and paternally-inherited endosperm genomes after fertilization. DME is required for endosperm gene imprinting. Homologues of DME include ROS1, DML2 and DML3. It is unknown whether any of these DNA glycosylases are required for endosperm methylation patterning. We show that ROS1 prevents hypermethylation of paternally-inherited alleles in the endosperm at regions that lack maternal or paternal-allele methylation in wild-type. Thus, ROS1 promotes epigenetic symmetry between genomes in the endosperm by preventing paternal genome hypermethylation. We investigated dynamics of DNA methylation at the edges of transposable elements (TEs), where ROS1 is known to prevent spreading of DNA methylation into neighboring regions of the genome. We found that DNA methylation spreading in the ros1 mutant is unidirectional, which has implications for the field’s understanding of the mechanism of ROS1 activity at TEs as well as the mechanism of methylation establishment at TEs. We also investigated the regulation of ROS1 expression through the interaction of ROS1 and the RdDM pathway at the ROS1 promoter. Using a previously characterized deletion in the ROS1 promoter, we investigated the consequences of ROS1 regulation across the genome in the presence of a wild-type RdDM pathway.
Finally, I discuss the implications of the work I have done in understanding the role of ROS1 across plant development and the mechanisms by which DNA methylation is patterned in plants, and propose future directions related to these findings.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162129</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conflict between bacteriophages and a mobile genetic element in bacterial immunity</title>
<link>https://hdl.handle.net/1721.1/162128</link>
<description>Conflict between bacteriophages and a mobile genetic element in bacterial immunity
Loyo, Christian L.
Bacteriophages (phages) are the most abundant biological entities on the planet. They are ubiquitous and numerous across the many environments in which bacteria are found. To combat phage predation, bacteria have evolved numerous immune strategies, so-called anti-phage defense systems. Likewise, phages encode counter-defenses that prevent the function of anti-phage defense systems. Many anti-phage defense systems are found within mobile genetic elements, like plasmids, temperate bacteriophages, and integrative and conjugative elements (ICEs). In this thesis, I show how an ICE in the bacterium Bacillus subtilis, called ICEBs1, protects populations of cells from predation by phages in the SPβ family. ICEBs1 has a phage defense system, spbK, which, upon phage infection, causes cell death before phage progeny are generated. This mechanism of phage defense is considered abortive infection and protects populations of cells via altruistic cell death of infected cells. I show that during SpbK-mediated abortive infection, cells experience NAD⁺ depletion dependent on the Toll-interleukin-1 receptor (TIR) domain of SpbK. Depletion of NAD⁺ likely starves both the cell and infecting phage of energy, killing the cell and preventing the generation of phage progeny. I found that SpbK recognizes phage infection by recognizing and binding to the phage portal protein, YonE, through an interaction between the N-terminus of SpbK and the clip domain of YonE. Furthermore, I show that a gene in the SPβ-like phage Φ3T, nip, was necessary and sufficient to prevent SpbK-mediated anti-phage defense. I found that Nip binds to the TIR domain of SpbK and inhibits NADase activity to prevent abortive infection and enable viable phage production. These findings highlight the conflicts that occur between mobile genetic elements and the co-evolutionary arms race between bacteria and phages.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162128</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Path Forward: Gentrification Management Strategies in Rural Trail-Based Outdoor Recreation Economies</title>
<link>https://hdl.handle.net/1721.1/162127</link>
<description>The Path Forward: Gentrification Management Strategies in Rural Trail-Based Outdoor Recreation Economies
Smith, Mistaya
Rural communities in the United States face economic challenges due to a combination of factors including the decline of the extractive sector, the departure of manufacturing, the agglomeration of farmland, and the regionalization of key public services. To some policymakers, this economic decline, in combination with the nation’s rural-urban political stratification, serves as reason to further abandon rurality and promote migration to urban areas. These policies overlook the interdependence between rural and urban ecosystems and ignore rural America’s unique assets. In capitalizing on rurality’s existing natural beauty and land access, the trail-based outdoor recreation economy functions as a form of asset-based economic development in rural communities. In connecting recreators to the land, serving as the setting of social connection, and creating place-based connections across time, trails further benefit rural communities through the construction of place attachment. Investment in trails as a form of economic development, however, commodifies nature so as to attract external interest in rural places. Externally-driven population increases and wealth influxes in rural communities can cause physical gentrification in the form of rising property values and resident displacement. This gentrification process also contains a cultural component as the commodification of nature and the demographic shift in rural places erodes place attachment between longtime residents and the land through the displacement of local place-based knowledge, changes in traditional land access, and disruption to recreational use patterns. Research suggests that those with deeper place attachments exhibit greater civic engagement, a deeper sense of community and belonging, and more care for their community and environment. Therefore, cultural gentrification can also lead to a decline in community care and a risk to rural vitality. 
This thesis examines five rural Northeastern towns with trail-based outdoor recreation economies to discern how each community approaches the risks of physical and cultural gentrification.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162127</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wildfire Risk Management for Informal Settlements in Chile</title>
<link>https://hdl.handle.net/1721.1/162126</link>
<description>Wildfire Risk Management for Informal Settlements in Chile
Sakai, Yuri
This thesis explores the critical intersection of wildfire risk and informal settlement development in Chile, focusing on the municipality of Viña del Mar. This city experienced the deadliest wildfires in the nation’s history in 2024 and holds the nation’s highest concentration of informal settlements. Despite this double vulnerability, the city has inadequately integrated wildfire resilience into its disaster risk management (DRM) framework, creating an urgent need for policy reform.&#13;
&#13;
Through combined statistical and geospatial analyses, the author documents informal settlements’ expansion trajectories, especially between 2011 and 2024, and systematically assesses their wildfire exposure. Utilizing unregularized community datasets, wildfire risk classifications, and municipal planning documents, the analyses revealed that the growth of informal settlements outpaces regularization interventions. They also unveiled that all of the informal communities in the city, including their wildland-urban interface zones, face significant fire risk.&#13;
&#13;
These findings further led the research to evaluate current Chilean wildfire governance under Law 21.364 (enacted in 2021 to provide comprehensive DRM across national, regional, and municipal administrative levels). Additionally, the study examines the disaster response mechanisms for the 2024 Chile Wildfires. These policy and evidence-based analyses identify persistently reactive approaches to disasters even four years after the policy transition, and reveal a systematic marginalization of informal settlements.
&#13;
Based on these findings, the research culminates in phase-specific, actionable policy recommendations addressing the compound vulnerabilities of informal communities through: 1) enhanced shelter capacity estimation methodologies; 2) formalized private sector involvement; 3) integrated tsunami-wildfire warning systems; 4) periodic intergovernmental learning opportunities; and 5) technical support in reconstruction. Given the 2024 tragedy and Chile’s ongoing transition toward comprehensive DRM, these interventions are particularly crucial to accelerating and consolidating that transition.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162126</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Effects of Immigration on Labor Markets</title>
<link>https://hdl.handle.net/1721.1/162125</link>
<description>Essays on the Effects of Immigration on Labor Markets
Gulek, Ahmet
This thesis consists of three chapters on the effects of immigration on labor markets. The first chapter studies the effects of an informal labor supply shock on the host regions, the second chapter investigates the spillover effects on non-host regions through the production network, and the third chapter provides a method that the first two chapters rely on.&#13;
&#13;
The first chapter studies the effects of Syrian refugees, who are denied work permits and thus can only work informally, on Turkish firms and workers. Using travel distance as an instrument for refugee location, I show that low-skill natives lose both informal and formal salaried jobs. I document two mechanisms: formal firms reduce their formal labor demand and new firms do not enter the formal economy. Estimates imply an elasticity of substitution of 10 between formal and informal workers. Counterfactual exercises predict that granting refugees work permits would have created up to 120,000 formal jobs in the economy through higher informal wages.&#13;
&#13;
The second chapter, co-written with Tishara Garg, investigates how immigration-induced wage shocks can propagate beyond the regions receiving immigrants through the production network. Using the Syrian refugee crisis in Turkey as a quasi-experiment and the near universe of domestic firm-to-firm transaction data from VAT records, we show that the immigration shock propagates both forward and backward along the supply chain. Firms in non-host regions who directly or indirectly buy from host regions demand more labor. Firms who sell to host regions weakly increase their sales. Estimates imply an elasticity of substitution between labor and intermediate goods of 0.76 and an elasticity of substitution of nearly 1 between intermediates. Counterfactual analyses show that the spillover effects on non-host regions are economically meaningful when the host regions are central nodes of the domestic trade network. For example, a 1% increase in labor supply in Istanbul decreases real wages in Istanbul by 0.56% and increases real wages in the average non-host city by 0.38%.&#13;
&#13;
The third chapter, co-written with Jaume Vives-i-Bastida, proposes a Synthetic Instrumental Variables (SIV) estimator for panel data that combines the strengths of instrumental variables and synthetic controls to address unmeasured confounding. We derive conditions under which SIV is consistent and asymptotically normal, even when the standard IV estimator is not. Motivated by the finite sample properties of our estimator, we introduce an ensemble estimator that simultaneously addresses multiple sources of bias and provide a permutation-based inference procedure. We demonstrate the effectiveness of our methods through a calibrated simulation exercise, two shift-share empirical applications, and an application in digital economics that includes both observational data and data from a randomized control trial. In our primary empirical application, we examine the impact of the Syrian refugee crisis on Turkish labor markets. Here, the SIV estimator reveals significant effects that the standard IV does not capture. Similarly, in our digital economics application, the SIV estimator successfully recovers the experimental estimates, whereas the standard IV does not.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162125</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Still Working: Re-examining America’s Urban Working Waterfronts</title>
<link>https://hdl.handle.net/1721.1/162124</link>
<description>Still Working: Re-examining America’s Urban Working Waterfronts
Zhang, Mabelle
While American urban waterfronts once served as critical sites of production, they are now disappearing, reflecting larger de-industrialization trends. This thesis argues for a critical re-examination of the continued and evolving role that waterfronts play as sites of work. It expands the definition of urban working waterfronts to include sites of industry, production, and economic activity, thereby aligning with these sites’ historic and ongoing uses. &#13;
&#13;
This thesis examines four working waterfronts in the Northeastern United States, a region with over 400 years of urban development driven by and around its waterfronts: the Central Waterfront in Portland, ME; the Waterfront District in New Bedford, MA; the Waterfront at Port Morris, NY; and the Waterfront at Sunset Park, NY. &#13;
&#13;
Through analyzing these cases, this thesis proposes a typology of working waterfronts: the Traditional Working Waterfront, the Industrial Working Waterfront, and the Hybrid Working Waterfront, based on key differences in uses, forms, and governance. &#13;
&#13;
This thesis argues that the central issue is not merely protecting working waterfronts, but understanding how they are adapting to new realities. State and community-driven protections through zoning help protect existing working waterfronts; however, these sites are not stagnant relics of historic working waterfronts—rather, they are ever-evolving in response to new economic realities, incorporating new industries, technologies, and public access into their sites.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162124</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pedestrian Accessibility and Individual’s Subjective Happiness</title>
<link>https://hdl.handle.net/1721.1/162123</link>
<description>Pedestrian Accessibility and Individual’s Subjective Happiness
Shikida, Aika
Cities in many countries are taking steps to use happiness as a formal policy measure of well-being, in addition to more commonly used economic indicators such as Gross Domestic Product. Economists and public policy and public health scholars have researched the factors that are associated with happiness, linking higher self-reported happiness outcomes with financial status, gender, social interactions, personal health, and sense of security. However, the link between happiness and the built environment around one’s home or workplace has been understudied and remains poorly understood. While location quality — particularly pedestrian accessibility to commercial, recreational, institutional, educational, and transportation facilities — is known to affect home location values, how the same set of location attributes that affect housing prices may have a relationship with happiness remains unclear. In theory, more convenient home locations offer individuals the capacity for independent living (e.g., walking access to destinations), social interactions (e.g., chance encounters with community members), and a sense of belonging (e.g., through self-sufficient neighborhood amenities) — qualities that should also contribute to happiness. This thesis reports on an exploratory analysis of location quality and self-reported happiness in the United States and Japan. Using a customized pedestrian accessibility metric, this thesis examines how access to daily destinations is related to individuals’ subjective happiness, controlling for socio-demographic variables. In the U.S. data, we found that people living in areas with higher pedestrian accessibility to destinations were not necessarily more likely to report being happier, on average. In fact, there was a small tendency for individuals in these areas to report slightly lower happiness levels, on average, after accounting for other influences such as age, income, and marital status. 
Note that the relationship between pedestrian accessibility and happiness may be more complex than expected and may involve other factors (e.g., presence or absence of greenery). We conducted an additional analysis by dividing the Census tracts into two groups based on population density. In areas with lower population density, the relationship between pedestrian accessibility and happiness remained negative and statistically significant and showed the same strength as the overall analysis. For Nagasaki, Japan, there was not a statistically significant relationship between happiness and pedestrian accessibility, but this might be due to a problem in the street network data, so further investigation is required. In addition, a qualitative analysis of Nagasaki reveals that residents report that problems with the walking environment (e.g., narrow sidewalks, slopes and stairs, darkness at night, road surface differences, distance to facilities) influence their travel behavior and happiness. Nevertheless, although the results of this thesis have limitations, as described above, promoting pedestrian accessibility should remain an important consideration for policy makers when setting public policy goals, since pedestrian accessibility could, for instance, lead to improved physical and mental health, as well as other benefits. For both the U.S. and Japan, future work is necessary to understand the complex experiences of individuals that include spatial, psychological, and environmental factors related to the built walking environment.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162123</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Political Economy</title>
<link>https://hdl.handle.net/1721.1/162122</link>
<description>Essays in Political Economy
Sapiro-Gheiler, Eitan
In this thesis, I describe three approaches to political communication and decision-making. Chapter 1, “Persuasion with Ambiguous Receiver Preferences,” studies an informed Sender who knows only the average threshold belief needed to persuade a Receiver and wishes to safeguard against unfavorable distributions of individual preferences. Chapter 2, “Discovery through Trial Balloons,” examines how correlation between different projects affects information disclosure by a principal who designs a bundle of projects that an agent can then choose to approve. Chapter 3, “Strategic Opinion-Writing on Appellate Courts,” describes how and why the partisan composition of quasi-random panels of judges on the U.S. Federal Courts of Appeals affects consensus-building. I describe each chapter in more detail below.&#13;
&#13;
The first chapter, “Persuasion with Ambiguous Receiver Preferences,” describes a Bayesian persuasion problem where Receiver has a private belief cutoff for Sender’s preferred action and Sender has maxmin preferences over all Receiver type distributions with known mean and bounds. This problem can be represented as a zero-sum game where Sender chooses a mean-preserving contraction of the prior over states and adversarial Nature chooses a Receiver type distribution. I formalize the connection between maxmin persuasion and similar games used to model political spending, all-pay auctions, and competitive persuasion. In both a standard binary-state setting and a new continuous-state setting, Sender optimally linearizes the prior distribution over states to create a distribution of posterior means that is uniform on a known interval with an atom at the lower bound of its support.&#13;
&#13;
The second chapter, “Discovery through Trial Balloons,” presents a model of a principal and an agent who face symmetric uncertainty about the agent's value for two correlated projects. The principal chooses which project values to publicly discover and makes a proposal to the agent, who accepts if and only if the expected sum of values is positive. I characterize optimal discovery for various principal preferences: maximizing the probability of the grand bundle, of having at least one project approved, and of a weighted combination of projects. My results show when discovering ex-ante disfavored projects may be optimal; these conclusions rationalize the inclusion of controversial policies in omnibus bills and the presence of moonshot projects in organizations.&#13;
&#13;
The third chapter, “Strategic Opinion-Writing on Appellate Courts,” studies consensus and decision-making by powerful judges on the U.S. Federal Courts of Appeals. Using quasi-random three-judge panels on these courts from 1970–2013, I document a novel pattern in dissenting opinions: compared to party-unanimous panels, party-mixed panels cause all judges to dissent more often, and at equal rates. This result is incompatible with classical models of judicial politics and is unique to partisanship. To explain my results, I introduce a theoretical framework where judges' favored coalitions are more homogeneous along both partisan and non-partisan dimensions. Using judge metadata, I find suggestive evidence for the model's result that polarization increases dissents by judges of panel-minority law school or gender. With state-of-the-art machine learning tools from natural language processing, I generalize beyond dissents, showing that those same features drive differences in opinion text even when rulings are unanimous.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162122</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering the biogenesis pathways for human mitochondrial alpha-helical outer membrane proteins using genome-wide approaches</title>
<link>https://hdl.handle.net/1721.1/162121</link>
<description>Uncovering the biogenesis pathways for human mitochondrial alpha-helical outer membrane proteins using genome-wide approaches
Muthukumar, Gayathri A.
Mitochondria are critical double-membraned organelles that act as biosynthetic and bioenergetic cellular factories, with the outer membrane providing an interface with the rest of the cell. In humans, the outer mitochondrial membrane (OMM) contains ~110 different proteins, which are encoded in the nuclear genome, synthesized in the cytosol, and must be targeted to the membrane. OMM proteins are defined by the secondary structure of their transmembrane domains (TMDs), as two classes – beta-barrel proteins, evolutionarily derived from the outer membranes of gram-negative bacteria, and alpha-helical proteins, an evolutionarily more recent class. Beta-barrel proteins are first translocated into the mitochondrial intermembrane space (IMS) via the translocase of the outer membrane (TOM) and subsequently inserted by the sorting and assembly machinery (SAM) complex. Comparatively, alpha-helical OMM protein biogenesis is poorly understood. Alpha-helical proteins are classified as signal-anchored (a single N-terminal anchored TMD), tail-anchored (a single C-terminal anchored TMD), and polytopic (multiple TMDs), by the number and orientation of their TMDs with respect to the membrane. While the novel OMM insertase MTCH2 was discovered using a genome-wide CRISPRi screen for alpha-helical tail-anchored substrates (Guna et al., 2022), the broader biogenesis and targeting pathways for all biophysically diverse alpha-helical proteins remained unexplored. Critically, the mechanisms of cytosolic chaperoning and targeting for all alpha-helical OMM proteins were unknown. This thesis presents a large-scale investigation that systematically delineates alpha-helical biogenesis pathways, from cytosolic chaperoning to membrane insertion to quality control of unassembled or mis-localized TMDs. Genome-wide CRISPRi screens in human cells for varied signal-anchored and polytopic substrates revealed novel cytosolic chaperones, targeting factors, and quality control factors.
Arrayed follow-up genetic screens against a large and biophysically more varied panel of substrates revealed that alpha-helical proteins are triaged in the cytosol by TMD number and topology, thus defining a set of ‘rules’ for biogenesis. Cell biological and in vitro biochemistry experiments further discovered a new role for the ribosome-bound chaperone NAC in regulating polytopic protein biogenesis and characterized a novel signal-anchored targeting factor, TTC1, that chaperones TMDs using a conserved C-terminal hydrophobic groove. Cumulatively, this work both defines the pathways for biogenesis and quality control of alpha-helical OMM proteins and identifies mechanisms by which mitochondrial protein composition, and thereby function, can be tuned through manipulation of mitochondrial membrane protein biogenesis machinery in diverse pathophysiological conditions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162121</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Listeria monocytogenes crosses host cell barriers</title>
<link>https://hdl.handle.net/1721.1/162120</link>
<description>How Listeria monocytogenes crosses host cell barriers
Hanna, Ruth
Listeria monocytogenes is a bacterial pathogen that causes listeriosis, a food-borne illness that can lead to severe complications and mortality in immunocompromised or pregnant people. Listeria is able to cross several host barriers to cause severe disease, including the intestinal barrier, the blood-brain barrier, and the placental barrier. This crossing is mediated by a diverse range of bacterial factors. In this review, I outline the key host barriers encountered by Listeria during infection and the mechanisms by which Listeria crosses each barrier.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162120</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and use of advanced nuclear diagnostics and neural networks to diagnose 3D morphology and power balance in inertial fusion implosions at OMEGA and NIF</title>
<link>https://hdl.handle.net/1721.1/162119</link>
<description>Development and use of advanced nuclear diagnostics and neural networks to diagnose 3D morphology and power balance in inertial fusion implosions at OMEGA and NIF
Kunimune, Justin H.
Inertial confinement fusion (ICF) is one of several ways to perform nuclear fusion in the laboratory, and is thus appealing as a potential future energy source. Achieving high gain at ICF facilities like the National Ignition Facility (NIF) and OMEGA requires new ways of measuring implosion conditions such as the shape of the shell at minimum volume and the power balance in the hot-spot. This dissertation describes several novel instruments and analysis techniques to measure these quantities. First is a method to combine information from existing diagnostics that probe asymmetries, such as the neutron imaging system, the real-time neutron activation detectors, and the neutron time-of-flight spectrometers. Our technique uses a forward-fit to a simplified physics model to produce a single self-consistent 3D picture of the implosion. Markov chain Monte Carlo is used to provide robust uncertainty quantification. Second is a knock-on deuteron imager to measure deuterons elastically scattered out of the shell by fusion neutrons. This diagnostic would enable a full 3D reconstruction of both the hot-spot and shell geometry. Analysis procedures were developed for this diagnostic, and commissioning experiments were carried out to validate the procedures and associated hardware, providing improved capabilities for imaging OMEGA implosions. Third is the MRSt, a spectrometer that would record a time-resolved neutron spectrum. Extensive modelling of the MRSt’s response and analysis procedures has been carried out, with which it has been predicted that the system as designed will meet the top-level physics requirements needed for novel insights. A path forward for implementing this spectrometer has been identified. These projects represent significant advancements in our abilities to diagnose ICF implosions, which will improve our understanding of degradations and failure modes in ICF implosions and lead to higher gain overall.
This will hopefully one day enable nuclear fusion energy as a clean energy source.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162119</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Institution and Innovation</title>
<link>https://hdl.handle.net/1721.1/162118</link>
<description>Essays on Institution and Innovation
Zhou, Jie
This dissertation explores the complex interplay between institutions and innovation across three distinct contexts: digital protectionism, academic governance, and language policy. The first essay examines whether protectionist policies can foster domestic innovation in the digital economy, focusing on China’s Great Firewall (GFW)—the world’s most extensive system of internet censorship. Leveraging the quasi-random timing of foreign app blockages, I find that Chinese substitute apps experienced a 30% increase in user base following foreign app bans. Using novel data extracted from compiled app code, I show that in-house technological development at these firms rose by 14% two years after blockage. This innovation diffused broadly, as both Chinese and foreign apps subsequently adopted more Chinese-origin technologies. I further document that expanded access to user data—enabled by increased data requests and third-party sharing—was a key driver. Quasi-random introductions of new data access types causally boosted in-house development, and firms receiving shared user data also intensified innovation. These findings suggest that digital protectionism, under certain conditions, can catalyze domestic technological growth. The second essay investigates how powerful institutional actors shape academic research and innovation in China. Using data on publications from researchers at 109 top Chinese universities and leadership transitions within these institutions, I apply natural language processing (NLP) techniques to assess alignment between faculty and leader research agendas. Faculty shift their research toward that of incoming leaders—particularly those appointed by the Communist Party—immediately after leadership transitions. This influence is stronger in fields with histories of political control or academic repression. 
While some alignment may reflect coordination, I find significant costs to research quality: transitions to low-productivity leaders lead to sharp increases in topic similarity and declines in citation impact, especially for research most closely aligned with new leadership. These results highlight the tension between centralized control and research autonomy in high-stakes innovation environments. The third essay explores how language policy affects national identity formation, analyzing Taiwan’s Chinese language unification campaign. Exploiting variation in individuals’ age-based ability to learn Mandarin and their linguistic distance from it, I implement a difference-in-differences design to identify the policy’s long-term effects. I find that cohorts more affected by the policy became more fluent in Mandarin but were less likely to identify as Taiwanese or support self-determination. The intergenerational disruption of native language transmission plays a key role, with the identity impact comparable to 11% of the effect of losing a parent. The policy also increased consumption of state-controlled media among treated cohorts. These findings underscore how language policies can reshape political identity and social cohesion. Together, these essays show that institutions—through mechanisms of control, exclusion, and cultural shaping—play a pivotal role in determining the direction, diffusion, and societal implications of innovation. JEL code: O33, O38, L86, I23, Z13, C23
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162118</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Oakland's Preservation Park: Planning for the Future</title>
<link>https://hdl.handle.net/1721.1/162117</link>
<description>Oakland's Preservation Park: Planning for the Future
Kaufman, Samantha
Preservation Park in Oakland is an anomaly. It is neither a green park nor strictly an office park: 16 historic homes, carefully renovated and maintained, are arranged around an internal way punctuated by a central fountain in the Victorian style. Seeds for this park were initially planted by the city's Landmark Preservation Advisory Board in 1976, and, with fits and starts, it opened in 1991. As Interstate Highway 980 was built, the park was created as a way to save a few of the most beautiful homes threatened by the Oakland Redevelopment Authority's urban renewal clearance and construction of the highways. Interstates 580, 880, and 980 were lashed across Oakland to bring suburban commuters over the bridge to San Francisco, cutting up a city of neighborhoods and destroying thousands of homes and small businesses. Oakland envisioned this acre and a half as a permanent site for community-based organizations and non-profits to revitalize the edge of downtown and West Oakland. &#13;
 &#13;
Since 1991, the office space has been rented to dozens of non-profits and has hosted hundreds of weddings, conferences, and other public and private events. In 2004, the community development corporation East Bay Asian Local Development Corporation (EBALDC) purchased the park from the city and continued to manage the property as a successful office park and event space. The COVID-19 pandemic irrevocably changed how many people work, and for the first time, Preservation Park vacancies increased and occupancy has remained substantially below 100%, presenting a challenge to EBALDC and its portfolio. This thesis seeks to provide the client with a framework to assess possible redevelopment and reprogramming schemes that is sensitive to EBALDC's community goals and to the requirement that the property sustain itself. Considering financial feasibility and partnerships, a multi-phase roadmap with a 20-year time horizon is presented to EBALDC. This will also provide a potential framework for more non-profit firms to pursue commercial real estate management and redevelopment as a strategy for community wealth-building and neighborhood stability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162117</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-Governing Care in Astoria, Queens: The Role and Responsibility of the City in Supporting Community-Led Solidarity Networks</title>
<link>https://hdl.handle.net/1721.1/162116</link>
<description>Co-Governing Care in Astoria, Queens: The Role and Responsibility of the City in Supporting Community-Led Solidarity Networks
Kleinbock, Yvette
In the spring of 2020, as COVID-19 spread across New York City and the United States, an inadequate government response and an overburdened social safety net left millions facing unemployment, eviction, and food insecurity with limited institutional support. Yet alongside these systemic failures, mass acts of solidarity emerged, as unprecedented numbers of people mobilized mutual aid efforts to help their neighbors survive. While many mutual aid groups have since disbanded or experienced burnout, others have sustained the work, helping to establish alternative infrastructures of collective care. Taking Astoria, Queens as a case, this thesis examines the political lessons that have emerged in the aftermath of the COVID-19 pandemic, focusing on what it takes to sustain community-led solidarity networks and considering the City’s role and responsibility in supporting urban infrastructures of care more broadly. To conceptualize this relationship between local community efforts and the City, I further consider the possibilities of co-governance as a framework for community care. This research utilizes a community-centered, relational, qualitative approach that draws on oral history and ethnographic traditions, including thematic analysis of key informant interviews, document review, and participant observation. Tracing the trajectory of mutual aid and other community-led efforts in Astoria and exploring the possibilities and challenges of collaborative governance, this research imagines how planning, policy, and governance strategies in New York City can deepen collective capacity, foster resilience, and advance more just and caring urban futures.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162116</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing a Digital Common Application for Affordable Housing in Massachusetts</title>
<link>https://hdl.handle.net/1721.1/162115</link>
<description>Implementing a Digital Common Application for Affordable Housing in Massachusetts
Moss, Emily
The need for affordable housing in Massachusetts is immense, with fragmented housing application processes further compounding barriers for low-income residents to access stable housing. To address these challenges, the Massachusetts Executive Office of Housing and Livable Communities (EOHLC) initiated the development of a digital common application (Common App) in 2024 to streamline tenant application and selection processes for privately owned publicly subsidized housing opportunities throughout the state. This client-based thesis offers an implementation roadmap for EOHLC to successfully operationalize the Common App within the agency.&#13;
&#13;
The roadmap is structured around three topics as requested by EOHLC: (1) organizational design considerations as the Common App scales, including internal staffing models, external vendor relationship management, and budget planning; (2) long-term technical integration opportunities, including identifying relevant data systems likely to interact with the Common App and potential areas for alignment; and (3) compliance mechanisms to ensure housing providers’ participation in the Common App, including a review of Massachusetts fair housing regulations as one possible strategy to require or incentivize providers to use the platform.&#13;
&#13;
Each topic draws from a review of state policies as well as academic literature in organization studies, information systems, and public administration; stakeholder interviews; and case study research on digital affordable housing search and application platforms in Massachusetts, Detroit, San Francisco, and the Bay Area—culminating in a series of recommendations for EOHLC to effectively administer the Common App over the long term.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162115</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ensuring Equitable Tenant Outcomes: Case Studies of Building Decarbonization Initiatives in Greater Boston, Massachusetts</title>
<link>https://hdl.handle.net/1721.1/162114</link>
<description>Ensuring Equitable Tenant Outcomes: Case Studies of Building Decarbonization Initiatives in Greater Boston, Massachusetts
Wong, Nicole
U.S. cities are ramping up building decarbonization initiatives to reduce greenhouse gas emissions from buildings. However, these programs and policies generate complex challenges at the intersection of housing, climate, and environmental justice, especially for cities that face barriers to adopting strong renter protections. This thesis offers two case studies regarding tenant-related equity concerns that emerged during the implementation of building decarbonization initiatives in greater Boston, Massachusetts: Boston’s building performance standard, the Building Emissions Reduction and Disclosure Ordinance (BERDO), and Everett’s energy efficiency incentive program, Electrify Everett. This thesis also identifies strategies that residents, community organizations, and city officials highlight as important to advance building decarbonization without generating unintended consequences for tenants. &#13;
Key equity concerns include the potential impacts of building decarbonization on rental affordability, displacement, and energy burden, whereas strategies include broad tenant protections such as rent control, renter protections attached to building decarbonization subsidies, and robust enforcement mechanisms. This research illuminates the need to build power to win essential tenant protections, focus decarbonization on housing with existing affordability protections, and advance alternative, decommodified forms of housing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162114</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neutronic Performance and Thermal Hydraulic Analysis of the MIT Reactor Fission Converter Experimental Facility Using High-Density U-10Mo Low-Enriched Uranium Fuel Elements</title>
<link>https://hdl.handle.net/1721.1/162113</link>
<description>Neutronic Performance and Thermal Hydraulic Analysis of the MIT Reactor Fission Converter Experimental Facility Using High-Density U-10Mo Low-Enriched Uranium Fuel Elements
Sears, Caroline Julia
The MITR fission converter (FC) is a core-driven subcritical assembly at the MIT Nuclear Reactor Laboratory, located on the MIT campus in Cambridge, MA. The assembly is made of eleven partially-depleted MITR-II fuel elements in a separate cooling tank attached to the side of the core-tank graphite reflector. The FC serves to boost the thermal flux from the core and send a hardened neutron spectrum to an irradiation target, providing a fission energy flux spectrum without the need to put a sample inside the core tank. It was previously used for boron-neutron capture therapy clinical trials before its decommissioning in the 2010s. Recently, it has been modified from a medical beamline to a general-use engineering and materials testing facility. The new FC-based experimental facility has roughly one cubic meter of empty space downstream intended to contain large experiments, called the m³. This work is a safety and performance study aimed at quantifying the impact of modifying the facility’s geometry as part of the FC’s recommissioning, as well as the impact of changing its fuel from HEU to LEU fuel as part of the MITR LEU conversion project. Neutronics and thermal hydraulics analyses of the renovated facility have been performed using the codes MCNP5 and STAT7, respectively. This analysis quantified the FC’s k_eff, power distribution, multi-group neutron flux, and conditions that cause onset of nucleate boiling (ONB). It was determined that the FC assembly will remain subcritical (k_eff &lt; 0.9) and low power (≤200 kW) under a wide range of performance conditions, including with both types of fuel and a variety of materials on the target-side of the FC tank. The HEU-fueled FC is expected to require no changes to the limiting safety system settings (LSSS) outlined in the original technical specifications document. The LEU fuel is expected to increase the FC performance, but as a tradeoff, will require minor changes to the LSSS setpoints to maintain margin to ONB under the most limiting thermal-hydraulic conditions. Additionally, this study evaluates the feasibility of using the FC for in-assembly fuel experiments, particularly as a pathway for testing the new LEU fuel elements at low power. This study indicated that the proposed FC configuration with one LEU and ten HEU elements is feasible and maintains wide safety margins.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162113</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Supervisory Control System as a Transition Technology Towards Autonomous Reactor Plant Operations</title>
<link>https://hdl.handle.net/1721.1/162112</link>
<description>Development of a Supervisory Control System as a Transition Technology Towards Autonomous Reactor Plant Operations
Fortier, Lauren G.
The economic viability of small and microreactors depends on reducing energy generation costs. The implementation of autonomous reactor control systems provides an avenue for reducing operations and maintenance expenses. Advanced reactor designs with enhanced passive safety features, reduced source terms, and digital instrumentation and control systems directly support autonomous controllers. In these plants, where the need for human operators is already reduced, the introduction of supervisory control systems (SCS) for dynamic operations further lessens operator dependence while building trust in these systems, laying a solid foundation for the transition to fully autonomous reactor control. &#13;
&#13;
Finite state automata (FSA) provide a framework for engineering fully verifiable and validatable supervisory controllers, and thereby facilitate the transition to autonomous nuclear power plant operations. FSA serve as a foundational mathematical tool for modeling discrete event systems (DES). Properties such as nonblocking and controllability can be formally demonstrated and verified by leveraging the extensive set of mathematical proofs within the scope of regular languages. Furthermore, a DES can be directly linked to reactor plant systems and operational procedures within a hierarchical architecture by using a graded functionalization approach analogous to that of complex dynamic systems, such as self-driving vehicles. In this scheme, feedback controllers can regulate low-level actuation functions while a supervisory controller can govern high-level plant state transitions. &#13;
&#13;
A generic supervisory controller was developed as a transition technology toward autonomous reactor operations. This controller was then tailored for application on a limited feedback model, for initial proof-of-concept testing, and then was scaled for use on light water reactor (LWR) simulators. In the absence of advanced reactor simulators for operational testing, LWR simulators were used because they provide realistic feedback and controls within a more conservative operating margin than advanced reactors. These supervisory controllers successfully executed operational procedures within a fully verifiable framework, establishing the foundation of this modeling approach and laying the groundwork for its implementation in advanced reactor designs. This scalable model thus facilitates a smooth transition from functioning as an operator aid to fully autonomous operation as a comprehensive plant controller, increasing the economic viability of nuclear power.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162112</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for Causal Estimation</title>
<link>https://hdl.handle.net/1721.1/162111</link>
<description>Machine Learning for Causal Estimation
Quintas-Martínez, Víctor M.
The intersection of causal inference and machine learning (ML) has given rise to powerful tools for tackling complex empirical questions, especially in high-dimensional or highly nonlinear settings where traditional methods often fall short. This thesis develops and analyzes novel ML-based methods for estimating causal effects, with a focus on flexibility, robustness, and valid statistical inference.&#13;
&#13;
The first chapter addresses the challenge of regularization and model selection bias that arises when ML is used to estimate nuisance parameters. We propose a new framework for automatic debiased machine learning (DML), which we term Riesz regression. This approach constructs debiased estimating equations without requiring explicit characterizations of the debiasing terms, allowing for seamless integration with any ML algorithm. We extend the framework to generalized regressions, including high-dimensional generalized linear models (GLMs). To illustrate its practical value, we apply Riesz regression to a study of discrimination in lending, showing how neural networks can be leveraged for automatic debiasing. Monte Carlo simulations demonstrate that our method frequently outperforms conventional inverse propensity weighting approaches.&#13;
&#13;
The second chapter introduces a new method for causal change attribution, which quantifies how different causal mechanisms contribute to shifts in the distribution of an outcome variable over time or across groups. Building on a given causal model, our approach combines regression and re-weighting to identify and estimate the relevant counterfactual quantities. Our methodology is multiply robust, meaning it remains valid even when some components of the model are misspecified. We establish consistency and asymptotic normality. Moreover, we show how our algorithm can be embedded into popular attribution frameworks such as Shapley values, which then inherit its statistical guarantees. Simulation studies confirm the excellent performance of our method, and we demonstrate its utility through an applied case study.&#13;
&#13;
The third chapter tackles a common challenge in applied work: estimating and conducting inference on many related causal parameters, such as causal effects of many treatments or on multiple outcomes. We derive uniform error bounds and construct valid simultaneous confidence bands for collections of average treatment effects (ATEs) estimated via DML. Our framework accommodates both finite sets and continua of functionals, and leverages strong Gaussian approximation results to account for dependence across estimates. This enables rigorous simultaneous inference with control over familywise error rates.&#13;
&#13;
Together, these contributions advance the state of the art in machine learning for causal estimation by unifying flexible modeling with rigorous inferential theory. The methods developed are broadly applicable to problems in economics, public policy, healthcare, and beyond, where understanding causal relationships in complex, data-rich environments is essential. This thesis emphasizes practical applicability while maintaining strong theoretical guarantees, equipping researchers with tools to make credible, data-driven causal claims.&#13;
JEL: C14, C21, C45
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162111</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Information Economics</title>
<link>https://hdl.handle.net/1721.1/162110</link>
<description>Essays on Information Economics
Veiel, Rafael
This thesis contains five chapters, each dealing with the question of how information affects equilibrium behavior in strategic problems. &#13;
&#13;
Chapter 1 is my job market paper "Limits of Global Games." It considers the impact of information on equilibrium multiplicity in two-player games of strategic complementarities. Games with strategic complementarities often exhibit multiple equilibria. In a global game, players privately observe a noisy signal of the underlying payoff matrix. As the noise diminishes, a unique equilibrium is selected in almost all binary-action games with strategic complementarities, a property known as "limit uniqueness." This chapter describes the limits of that approach in two-player games, as we move beyond two actions. Unlike binary-action games, limit uniqueness is not an intrinsic feature of all games with strategic complementarities. When the noise is symmetric, we demonstrate that limit uniqueness holds if and only if the payoffs exhibit a generalized ordinal potential property. Moreover, we provide an example illustrating how this condition can be easily violated.&#13;
&#13;
Chapter 2 is co-authored with Olivier Gossner and is titled "Strategic Type Spaces." We provide a strategic foundation for information: in any given game with incomplete information we define strategic quotients as information representations that are sufficient for players to compute best-responses to other players. We prove (1) existence and essential uniqueness of a minimal strategic quotient called the Strategic Type Space (STS), in which a type is given by an interim correlated rationalizability hierarchy together with the set of beliefs over other players' types and nature that rationalize this hierarchy; (2) that this minimal STS is a quotient of the universal type space; and (3) that the minimal STS has a recursive structure that is captured by a finite automaton.&#13;
&#13;
Chapter 3 is also co-authored with Olivier Gossner and is titled "Information Design for Rationalizability." We study (interim correlated) rationalizability in games with incomplete information. For each given game, we show that a simple and finitely parameterized class of information structures is sufficient to generate every outcome distribution induced by general common prior information structures. In this parameterized family, players observe signals of two kinds: a finite signal and a common state with additive, idiosyncratic noise. We characterize the set of rationalizable outcomes of a given game as a convex polyhedron.&#13;
&#13;
Chapter 4 is co-authored with Stephen Morris and Dirk Bergemann and is titled "A Strategic Topology on Information Structures." Two information structures are said to be close if, with high probability, there is approximate common knowledge that interim beliefs are close under the two information structures. We define an "almost common knowledge topology" reflecting this notion of closeness. We show that it is the coarsest topology generating continuity of equilibrium outcomes. We show that finite information structures are dense in the almost common knowledge topology and thus it is without loss to restrict attention to finite information structures in information design problems.&#13;
&#13;
Finally, Chapter 5 is a short note describing an information aggregation mechanism that can be used by players before playing a game of strategic complementarities under incomplete information. In such a game, players may have an incentive to share overly optimistic information with other players, thus inducing them to play higher actions. In this mechanism, players trade a token before playing the game. Players who want to communicate good news must purchase this worthless token and burn resources. The note shows that players only need to observe the market clearing price that arises from the token trades to aggregate their private information. Each element in a player's private information set is encoded as a prime in the prime factorization of the market clearing price. The element that is contained in every player's information set is identified as the prime with the highest multiplicity.&#13;
JEL Classification Codes: C72, D82
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162110</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on the Economics of Private and Social Insurance</title>
<link>https://hdl.handle.net/1721.1/162109</link>
<description>Essays on the Economics of Private and Social Insurance
Solomon, Adam
The first chapter, joint with Sylvia Klosin, studies the 'scope' of insurance. Distinct risks are typically insured separately. A single 'aggregate' contract that pays more when many shocks occur simultaneously, but less when positive shocks offset negative shocks, is utility-increasing absent moral hazard. However, an aggregate contract discourages diversification, leading to a novel insurance-incentive trade-off. We study the US Federal Crop Insurance Program (FCIP), where farmers can choose the 'scope' of their policy: whether to insure each field separately, or all fields of the crop as an aggregate unit. Starting in 2009, the FCIP introduced a large subsidy increase for aggregate insurance. We show that farms that moved to aggregate insurance reduced crop diversity and irrigation, farmed less and conserved more land, and insured price risk, all reducing the diversification of their risks. This increased the variability of farm yield by 14%, raising the fiscal cost of aggregate insurance by about $1.5 billion per year. We derive and estimate a formula for the optimal contract scope. We find that an aggregate policy is never welfare maximizing, but that the optimal policy lies partway between separate and aggregate. More generally, we discuss scope's widespread relevance in insurance design.&#13;
&#13;
The second chapter proceeds from the fact that increasing climate risk has caused insurance in many locations to become unaffordable or unavailable. I study a novel policy response in Australian home insurance: government-provided, mandatory, actuarially fair reinsurance for cyclone damage. In this scheme, the government reinsures the cyclone risk, while the private market covers the remaining idiosyncratic risk. I find that public reinsurance leads to a 21% decrease in home insurance premiums and an 11% increase in the probability of insurance being offered at all. In terms of mechanisms, I rule out subsidization and show that the ambiguity of the risk has a minimal impact on premiums and insurance offerings. Instead, the entirety of the increase in insurance offered, and much of the decrease in premiums, comes from reducing the implicit costs associated with insuring spatially correlated risk. Increased competition due to insurer entry explains the remaining premium reductions. This isolates the cause of market dysfunction, correlated risk, and suggests that public reinsurance is a cost-effective policy to rehabilitate insurance markets for catastrophic climate risks.&#13;
&#13;
The third chapter studies bundling in insurance contracts. Every insurance contract bundles risks, and explicit bundling discounts are common. I show theoretically that bundling arises in a competitive market whenever correlation between risk types enables insurer 'cream-skimming': willingness-to-pay for insurance against one risk must be negatively correlated with expected costs from the other risk. I analyze long-term care insurance, in which both-spouse bundles are discounted by 20-35%. I show that cream-skimming incentives are sufficient to explain these discounts, and model-predicted equilibrium bundling discounts closely match empirical discounts. I rule out standard economies-of-scale and differential contract lapsation as alternate explanations of the offered discounts. Counterfactually, banning bundling would raise welfare by 10% by correcting separate-market unraveling, while mandatory family bundling would reduce welfare by 15% by exacerbating advantageous selection.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162109</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Development Economics and Trade</title>
<link>https://hdl.handle.net/1721.1/162108</link>
<description>Essays in Development Economics and Trade
Wiles, Edward
This thesis comprises three chapters. &#13;
&#13;
The first chapter, written with Deivy Houeix, studies search and trust frictions, which have historically made it hard for small firms in lower-income countries to buy inputs from foreign markets. The growth in smartphone ownership and social media usage has the potential to alleviate these barriers. Informed by a dynamic model of relational contracting, we run a field experiment leveraging these technological tools to provide exogenous variation in (1) search frictions and (2) trust frictions (adverse selection and moral hazard) in a large international import market. In our search treatment, we connect a randomly selected 80% of 1,862 small garment firms in Senegal to new suppliers in Turkey. We then cross-randomize two trust treatments that provide additional information about the types (adverse selection) and incentives (moral hazard) of these new suppliers. Alleviating search frictions is sufficient to increase access to foreign markets: in all treated groups, firms are 26% more likely to have the varieties a mystery shopper requests and the goods sold are 30% more likely to be high quality. However, the trust treatments are necessary for longer-term impact: using both transaction-level mobile payments data and a follow-up survey, we show that these groups are significantly more likely to develop the connections into relationships that persist beyond the study. These new relationships lead to increases in medium-run profit and sales. Finally, we use the treatment effects to estimate the model and evaluate counterfactuals where we set various combinations of the frictions to zero, finding that the largest gains come from eliminating adverse selection.&#13;
&#13;
The second chapter, written with Habib Ansari and Dave Donaldson, is motivated by a modern revolution in spatial economic modeling that aims to answer quantitative counterfactual questions by using models that feature micro-level heterogeneity. This heterogeneity is then often assumed to come from particular parametric families, such as Frechet in Eaton and Kortum's (2002) Ricardian model. While these parametric choices greatly enhance the tractability of model simulations, it is unknown how sensitive the answers to counterfactual questions are to these assumptions of convenience, because there are infinitely many alternative distributions of heterogeneity to be evaluated. We overcome this challenge by building a general trade model that leverages recent advances in the robustness literature. Our method calculates sharp bounds on the values of model counterfactuals that could obtain, while still exactly matching all aggregate trade data points and a gravity-like moment condition and satisfying equilibrium constraints, under all possible distributions of underlying heterogeneity that lie within a given divergence from a chosen reference distribution. Applying this method to the Eaton and Kortum (2002) model, we find that the gains from trade in these models could be several times larger or smaller than they appear to be under standard benchmark distributions, even if heterogeneity is drawn from a distribution that is at least as similar to Frechet as are the types of parametric alternatives that are commonly explored in sensitivity analysis.&#13;
&#13;
The third chapter, written with Tishara Garg, studies regional integration, a major issue both across and within countries. Yet, integration can take many forms, ranging from lowering tariffs to lowering administrative frictions. We provide evidence on the gains to removing administrative frictions using rich microdata on firm-to-firm trade to study a major fiscal integration reform in India. Using an event-study style regression derived from a gravity model, we estimate that the reform increased interstate trade by around 15% on average. We plug this estimate into the model and use it to calculate the aggregate and distributional welfare gains. We find that all but a handful of districts saw welfare gains, with an aggregate welfare increase of around 1%.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162108</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Anisotropic Noise and Fast Gates with Superconducting Qubits</title>
<link>https://hdl.handle.net/1721.1/162107</link>
<description>Exploring Anisotropic Noise and Fast Gates with Superconducting Qubits
Rower, David A.
Rapid recent progress in the engineering of quantum systems across multiple platforms has enabled quantum science at never-before-seen precision and scale, and may yield useful quantum technology. However, two major challenges slow such progress: (1) decoherence from interactions between target systems and uncontrolled external degrees of freedom, and (2) errors in the control of target systems, which often arise from physics beyond the models used to design control protocols. We report on three novel results addressing both coherence and control, utilizing superconducting qubits. &#13;
&#13;
Our first result is the characterization of superconducting qubit flux noise, a primary source of decoherence, under the influence of weak, in-plane magnetic fields. We reveal two trends which serve as a novel experimental benchmark for microscopic theories of flux noise: (1) a 1/f to approximately Lorentzian transition in the noise power spectral density below 1 Hz, and (2) noise suppression above 1 MHz. &#13;
&#13;
Our second result is the suppression of coherent qubit-control errors induced by the counter-rotating component of strong, linearly-polarized drives. We establish two complementary protocols for mitigating such errors, which previously limited the speed of single-qubit gates for low-frequency qubits. The first protocol realizes circularly-polarized drives in circuit quantum electrodynamics. The second protocol, commensurate pulses, uses pulse-timing restrictions to homogenize counter-rotating errors and enable their mitigation with conventional calibration routines. With commensurate pulses, we demonstrate world-class single-qubit gate fidelities reliably exceeding 99.997%.&#13;
&#13;
Our third result is the observation of a novel signature in the decoherence dynamics of qubits subject to anisotropic transverse noise. Through injected noise experiments with a fluxonium qubit, we directly observe time-domain state-purity oscillations at twice the qubit frequency arising from the intrinsic qubit Larmor precession. We probe the oscillation dependence on noise anisotropy, lab-frame orientation, and power spectral density. Such oscillations are a result of physics beyond standard qubit-decoherence models within the rotating-wave approximation, and were previously unobserved in experiment.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162107</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Critical Water, Wastewater, and Thermal Infrastructure Development for a Resilient Neighborhood in War-Affected Ukraine</title>
<link>https://hdl.handle.net/1721.1/162106</link>
<description>Critical Water, Wastewater, and Thermal Infrastructure Development for a Resilient Neighborhood in War-Affected Ukraine
Gendler, Isaac A.
The Central Ukrainian municipality of Tetiiv is experiencing an influx of migrants due to its relatively safe position amid the Russian invasion. Tetiiv, in collaboration with the Ukrainian NGO Vid Sertsya Budova, is building a new neighborhood to accommodate internally displaced people, refugees, war veterans, and local residents. The neighborhood will require water, wastewater, and thermal infrastructure that satisfies European Union requirements given Ukraine’s ambition to join the economic bloc. This thesis performs a pre-feasibility study to help Tetiiv and Vid Sertsya Budova create an optimal configuration of water, wastewater, and thermal infrastructure for the new neighborhood. For water infrastructure, the report calculates water consumption using the BREEAM framework, quantifies storage requirements, analyzes water quality, estimates rainwater harvesting potential, and identifies optimal water source locations within 30 km using the DRASTIC methodology combined with geospatial analysis. For wastewater infrastructure, the study estimates wastewater generation, analyzes different wastewater treatment options, and uses a decision matrix to identify the optimal wastewater system for the site: a moving bed biofilm reactor. For thermal infrastructure, the study develops a conceptual heating system for the new neighborhood, incorporating ground-source heat pumps in each row house and single-family home, vertical boreholes, a thermal energy network, and a wastewater heating system for the multifamily co-living units. This study offers a blueprint for Ukraine and other regions recovering from urbicidal conflict and disaster to rebuild in alignment with the new climate paradigm.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162106</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Affect in Resiliency Planning: A Conversation with Broad Channel</title>
<link>https://hdl.handle.net/1721.1/162105</link>
<description>Affect in Resiliency Planning: A Conversation with Broad Channel
Fiol, Olivia
Planning for climate change is more relevant than ever, as the earth continues to warm, sea levels rise, and no global policy or political will is in sight. In order to plan under hostile circumstances, it is of the utmost importance that planners turn our attention to the hyper-local scale, continuing momentum in our personal and professional relationships. In this thesis, I argue that centering affective experiences of place is essential in conversations about the future of places under climate change, especially in communities and neighborhoods resistant to the conversation about climate change’s impacts on their futures in the first place. This project focuses on Broad Channel, the only inhabited island community in New York City’s Jamaica Bay, which is on the front lines of sea level rise and tidal flooding in the city. I interviewed city leaders, community members, artists, planners, and activists to understand how we can move through and with affect when considering the future of a place. This can open up conversations about climate change that were previously inaccessible. These conversations also surfaced the need for planners to regroup and understand how their own affective positions impact difficult conversations about climate change. I offer these insights and recommendations for future resiliency planning work, reflecting both inward and outward.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162105</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Between Fields and Cities: The Politics of Land Use Changes in Punjab, India</title>
<link>https://hdl.handle.net/1721.1/162104</link>
<description>Between Fields and Cities: The Politics of Land Use Changes in Punjab, India
Kodzis, Trevor Quigley
This thesis examines the urbanization of agricultural lands in the State of Punjab, looking for patterns that explain the type of development that is occurring while embedding these transformations in a larger political and economic context. The study focuses on both transportation infrastructure and the real estate developments surrounding it, as a way of situating Punjab within a larger discourse on infrastructure and urbanization in the Global South. Through case studies of three Punjabi cities (Mohali, Bathinda, and Ludhiana), this paper employs remote sensing to analyze recent transformations from agricultural to developed land across different land use zones, revealing two primary patterns. First, highway infrastructure projects have been delayed because of land acquisition problems and a contentious political environment. Second, with the exception of Ludhiana, most of the real estate in Punjab is concentrated in the residential sector. This apparent stagnation of manufacturing growth in Punjab results from a wide range of political and economic factors including high land prices, protest movements, emigration, fiscal policies, geography, and competition with other states. In contrast to the rest of the state, Ludhiana has successfully attracted industrial growth, illustrating how cities that urbanized earlier follow a different path of economic development.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162104</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ozarkitecture: Shaping the Sense of a Region</title>
<link>https://hdl.handle.net/1721.1/162103</link>
<description>Ozarkitecture: Shaping the Sense of a Region
Jones, Rubin
Contemporary planning often invokes a “sense of place,” yet the deeper work of placemaking remains largely unfulfilled. In its absence, cities and regions fracture into landscapes that appear whole but feel hollow. These are spaces stripped of the sensory depth and symbolic meaning that make dwelling possible. This thesis thus returns to the concept of the genius loci—the spirit of place—not as a nostalgic embellishment, but as an ethical and practical imperative. It traces the philosophical and historical foundations of place, examines how contemporary practice has diluted its meaning, and explains why a new approach is necessary. From this foundation, the project engages Kevin Lynch’s operational models and develops a reframed approach—shifting from a visual image to an embodied experience—to ground planning practice in the textures of memory, movement, and belonging. Five new concepts—anchor, patch, joint, seam, and trail—offer a vocabulary for cultivating places that hold meaning across time and transformation. This framework is applied in Northwest Arkansas, a region where rapid growth threatens to outpace the character of its communities. By strengthening sensory experience, rooted memory, and collective authorship, this project aims to offer a different way forward through regional transit—where planning not only shapes space, but safeguards access to the ongoing, unfinished project of place itself.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162103</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Navigating Identity and Place: The Role of Displacement Camps in Community Rebuilding and Identity Preservation in Sudan</title>
<link>https://hdl.handle.net/1721.1/162102</link>
<description>Navigating Identity and Place: The Role of Displacement Camps in Community Rebuilding and Identity Preservation in Sudan
Sati, Maysaa
Displacement camps are often framed as zones of impermanence: spaces of waiting designed to contain crises, not cultivate futures. Yet, in Kalma Camp in South Darfur, displacement has given rise to a self-organized, complex urban environment shaped by collective labor, cultural resilience, and everyday acts of spatial and political agency. This thesis explores how communities in Kalma have remade space, redefined home, and preserved identity in the face of prolonged uncertainty. Drawing on ethnographic fieldwork, spatial analysis, and critical urban theory, it situates Kalma not as an exception, but as a generative urban formation—an emergent city born from the margins.&#13;
Through chapters that trace the camp’s spatial evolution, intergenerational understandings of belonging, informal governance, cultural production, and political expression, this research challenges dominant humanitarian paradigms that treat camps as temporary and peripheral. It argues that residents are not passive recipients of aid, but planners, builders, and cultural producers who contest displacement through care, memory, and infrastructure. By threading together theoretical insights from scholars such as Malkki, Bhabha, Roy, and Simone with grounded narratives from Kalma, the study reveals how displacement can also be a site of urban possibility.&#13;
In reframing camps like Kalma as sites of urban life, not despite the crisis, but through it, this thesis calls for a fundamental shift in how urban planners, humanitarian actors, and scholars engage with protracted displacement. It invites us to see resilience as planning, care as governance, and the camp not as a space of suspension, but as a place where new urban futures are already being forged.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162102</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in International Macroeconomics</title>
<link>https://hdl.handle.net/1721.1/162101</link>
<description>Essays in International Macroeconomics
Gertler, Sarah M.
How do exchange rates and tariffs shape the economy? Their effects, both independently and in relation to each other, are puzzling with respect to classical economic models. This dissertation focuses on how macroeconomic factors -- sticky prices (Chapter 1), interest rates (Chapter 2), and scale economies (Chapter 3) -- inform the scope of exchange rates and tariffs. Unveiling these factors helps illuminate the nature of these shocks' influence.&#13;
&#13;
Chapter 1: The first chapter revisits the classic relationship between exchange rate pass-through, how exchange rates influence prices, and expenditure-switching, the resulting substitution between home and foreign goods. Expenditure-switching is the main channel through which exchange rates transmit to the real economy. Conventional wisdom holds that this channel's strength is increasing in exchange rate pass-through into prices: assuming the import demand elasticity is independent of pass-through, larger effects of exchange rates on prices yield larger substitution of spending between domestic and foreign goods. In this paper, I show that this conventional wisdom does not hold. Using confidential US micro-data and a panel-data local projection technique, I show that quantity-exchange rate elasticities are similar across high and low pass-through environments. In essence, low pass-through is subject to a larger import demand elasticity than is high pass-through. I then propose an extension of a standard small open economy New Keynesian model by adding a layer of import buying (retail) firms, in which both exporting and importing firms are subject to price rigidities. I show empirically and theoretically that the “import buyer rigidity” dampens overall adjustment, but less so under low pass-through because in this case the pass-through is more persistent. The model thus accounts for why the quantity-exchange rate elasticities are similar across pricing regimes. I conclude by exploring the implications of this framework for monetary and exchange rate policy, actually finding a stronger expenditure-switching channel under low pass-through.&#13;
&#13;
Chapter 2: The second chapter, joint with Victor Orestes, documents how currency markets and trade flows respond to tariffs imposed by and on the US in relation to other countries' macrofinancial positions. We show that countries which maintain higher interest rates than the US depreciate much more strongly -- to the point of offsetting the tariffs on impact -- than their low-interest counterparts. However, these effects are not as persistent as the tariff shocks. Our results highlight a US hegemonic asymmetry: tariffs imposed on the US have little effect on currency markets, US demand for high-interest countries' goods is relatively elastic, but the latter's demand for US exports is not. Monetary policy can be an effective tool to target exchange rate fluctuations, as it has an incidence similar to that of tariffs. Finally, we present evidence that the interest rate analysis could draw from trade-network fundamentals. To rationalize our findings, we modify a baseline model of exchange rate determination using the interest rate as a "sufficient statistic" wedge in fundamentals. Our model indicates that the financial market imperfections we observe in data distort the global response to tariff escalation.&#13;
&#13;
Chapter 3: The third chapter proposes an answer to the question of why there is complete long-run pass-through of both tariffs and exchange rates in US exports, despite evidence of flexible markups. I develop a methodology to leverage tariffs and exchange rates to uncover the structural drivers of pass-through, the markup elasticity and the marginal cost scale elasticity. I derive and quantify the scale channel of pass-through, which can be decomposed into a bilateral scale and the novel "shock span" scale effect. The shock span channel arises because different correlation patterns across customers enter prices via the scale channel. Because exchange rates are correlated across trading partners, compared to tariffs they have greater capacity for shock-span effects of scale economies. Quantifying the bilateral and shock span components of the scale channel, the paper demonstrates that scale economies can rationalize the discrepancy between markup flexibility and observed pass-through.&#13;
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162101</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Technology and Trade</title>
<link>https://hdl.handle.net/1721.1/162100</link>
<description>Essays on Technology and Trade
Kikuchi, Shinnosuke
This thesis consists of essays on technology and trade. In Chapter 1, I study how technology in the 21st century has changed the pattern of trade. I document that skill-abundant countries no longer have a comparative advantage in skill-intensive sectors. While this empirical relationship was strong in the 1980s, it weakened in the 1990s and disappeared by the 2000s. The decline is more pronounced in countries and sectors with higher automation. I find no such heterogeneous effects among countries and sectors more exposed to offshoring. Using a quantitative trade model incorporating automation and offshoring, I confirm that the observed changes in automation can account for the evolution of comparative advantage while observed changes in offshoring cannot. I conclude by revisiting the relationships between globalization, technology, and inequality through this model. Automation increases skill premia in developed countries with high automation and also raises welfare globally, whereas offshoring leads to smaller, more evenly distributed welfare gains.&#13;
&#13;
In Chapter 2 (joint with Daniel G. O'Connor), we turn to the geographic consequences of technology and trade by analyzing the role of granularity—the dominance of a few large firms in local labor markets. We propose a new economic geography model featuring granular firms subject to idiosyncratic shocks. We show that average wages increase in the size of the local labor market due to that granularity, and provide a sufficient statistic for the contribution of our mechanism. We further prove that too few firms enter in equilibrium. Using Japanese administrative data on manufacturing, we provide evidence consistent with our mechanism and quantify it. Our mechanism implies that markets with around 2 firms per sector have an elasticity of wages to population of 0.05 and firms capture only 85% of their contribution to production in profits. In large markets like Tokyo, the elasticity is around 0.001, and firm entry is approximately efficient. Enacting optimal place-based industrial policy would increase the number of firms in modest-sized cities by more than 30% and actually decrease the number of firms and people in Tokyo.&#13;
&#13;
In Chapter 3 (joint with Sagiri Kitao), we study the distributional consequences of technological and trade-induced polarization—wage and employment losses of middle-class workers relative to low- and high-skill groups. We build a model of overlapping generations who choose consumption, savings, labor supply, and occupations over their life-cycles, and accumulate human capital. We simulate a wage shift observed since the early 1980s and investigate individuals' responses. Polarization improves welfare of young individuals that are high-skilled, while it hurts low-skilled individuals across all ages and especially younger ones. The gain of the high-skilled is larger for generations entering in later periods, who can fully exploit the rising skill premium.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162100</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of Chromatin Landscape on and by the Human Sex Chromosomes</title>
<link>https://hdl.handle.net/1721.1/162099</link>
<description>Regulation of Chromatin Landscape on and by the Human Sex Chromosomes
Bokil, Neha Vijay
Sex chromosome constitution is the largest and oldest source of genetic variation in the human population. One sex chromosome—the “active X” (Xa)—is present in all individuals. The second sex chromosome differs between sexes; most males have a Y while most females have a second X, which adopts a distinct conformation from Xa and is termed the “inactive X” (Xi). Despite its name, the human Xi expresses ~20% of its genes. Xi-expressed genes and their Y homologs play critical gene regulatory roles. Examining mechanisms and effects of Xi gene expression is essential to understanding these functions. In this thesis, I investigate chromatin landscape across the human Xi to identify features of Xi-expressed and Xi-silent genes; I also interrogate the role of an Xi-expressed gene and its Y homolog in regulating chromatin genome-wide. To examine chromatin state differences between Xi-expressed and Xi-silent genes, we quantified H3K4me3, H3K27me3, and CTCF along Xi by linear modeling in cells of individuals with zero to three Xis. We demonstrate that Xi-expressed genes are enriched for H3K4me3 compared to Xi-silent genes. Moreover, Xi-silent genes near strongly Xi-expressed genes have higher H3K27me3 than other Xi-silent genes. CTCF shields strongly Xi-expressed gene promoters from surrounding heterochromatin. We propose a framework associating combinations of chromatin marks with subcategories of Xi-expressed and Xi-silent genes. A key Xi-expressed gene, KDM6A, encodes an H3K27me3 demethylase—enabling Xi to impact chromatin structure genome-wide. Its Y homolog, UTY, is thought to encode a catalytically dead enzyme. However, we demonstrate that Xi and Y copy number-dependent changes to H3K27me3 across autosomes are strongly correlated. Moreover, KDM6A knockdown results in increased H3K27me3 at similar genomic regions as UTY knockdown. We posit that KDM6A and UTY share demethylase-dependent functions. 
Deciphering features and genome-wide effects of Xi expression is essential to understanding fundamental mechanisms of gene regulation and the shared and differential roles of the sex chromosomes outside the reproductive tract. This work highlights critical chromatin-level differences between Xi-silent and Xi-expressed genes, the effects of an Xi-expressed gene on chromatin structure genome-wide, and striking similarities between Xi and Y in modulating autosomal chromatin structure and gene expression.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162099</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Econometrics and Policy Evaluation</title>
<link>https://hdl.handle.net/1721.1/162098</link>
<description>Essays on Econometrics and Policy Evaluation
Vives-i-Bastida, Jaume
This thesis consists of four chapters that study the statistical properties of synthetic control methods and their application to public policy evaluation and the digital economy. &#13;
&#13;
The first chapter, co-written with Ahmet Gulek, proposes a Synthetic Instrumental Variables (SIV) estimator for panel data that combines the strengths of instrumental variables and synthetic controls to address unmeasured confounding. We derive conditions under which SIV is consistent and asymptotically normal, even when the standard IV estimator is not. Motivated by the finite sample properties of our estimator, we introduce an ensemble estimator that simultaneously addresses multiple sources of bias and provide a permutation-based inference procedure. We demonstrate the effectiveness of our methods through a calibrated simulation exercise, two shift-share empirical applications, and an application in digital economics that includes both observational data and data from a randomized control trial. In our primary empirical application, we examine the impact of the Syrian refugee crisis on Turkish labor markets. Here, the SIV estimator reveals significant effects that the standard IV does not capture. Similarly, in our digital economics application, the SIV estimator successfully recovers the experimental estimates, whereas the standard IV does not.&#13;
&#13;
The second chapter, co-written with Ignacio Martinez, proposes a Bayesian alternative to the synthetic control method and explores the frequentist properties of the method in the context of linear factor models. In this chapter, we characterize the conditions on the factor model primitives (the factor loadings) for which the statistical risk minimizers are synthetic controls (in the simplex). Then, we propose a Bayesian alternative to the synthetic control method that preserves the main features of the standard method and provides a new way of doing valid inference. We explore a Bernstein-von Mises style result to link our Bayesian inference to the frequentist inference. For linear factor model frameworks we show that a maximum likelihood estimator (MLE) of the synthetic control weights can consistently estimate the predictive function of the potential outcomes for the treated unit and that our Bayes estimator is asymptotically close to the MLE in the total variation sense. Through simulations, we show that there is convergence between the Bayesian and frequentist approaches even in sparse settings. Finally, we apply the method to revisit the study of the economic costs of the German reunification and the Catalan secession movement. The Bayesian synthetic control method is available in the bsynth R-package.&#13;
&#13;
The third chapter recognizes that synthetic control methods often rely on matching pre-treatment characteristics (called predictors) of the treated unit, and that the choice of predictors and how they are weighted plays a key role in the performance and interpretability of synthetic control estimators. This chapter proposes the use of a sparse synthetic control procedure that penalizes the number of predictors used in generating the counterfactual to select the most important predictors. I derive, in a linear factor model framework, a new model selection consistency result and show that the penalized procedure has a faster mean squared error convergence rate. Through a simulation study, I then show that the sparse synthetic control achieves lower bias and has better post-treatment performance than the unpenalized synthetic control. Finally, I apply the method to revisit the study of the passage of Proposition 99 in California in an augmented setting with a large number of predictors available.&#13;
&#13;
The fourth chapter, co-written with Alberto Abadie, proposes a set of simple principles to guide empirical practice in synthetic control studies. The proposed principles follow from formal properties of synthetic control estimators, and pertain to the nature, implications, and prevention of over-fitting biases within a synthetic control framework, to the interpretability of the results, and to the availability of validation exercises. We discuss and visually demonstrate the relevance of the proposed principles under a variety of data configurations.&#13;
JEL: C23, C26, C11, C52.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162098</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy Flow in Particle Collisions</title>
<link>https://hdl.handle.net/1721.1/162097</link>
<description>Energy Flow in Particle Collisions
Metodiev, Eric Mario
In this thesis, I introduce a new bottom-up approach to quantum field theory and collider physics, beginning from the observable energy flow: the energy distribution produced by particle collisions. First, I establish a metric space for collision events by comparing their energy flows. I unify many ideas spanning multiple decades, such as observables and jets, as simple geometric objects in this new space. Second, I develop a basis of observables by systematically expanding in particle energies and angles, encompassing many existing observables and uncovering new analytic structures. I highlight how the traditional criteria for theoretical calculability emerge as consistency conditions, due to the redundancy of describing an event using particles rather than its energy flow. Finally, I propose a definition of particle type, or flavor, which makes use of only observable information. This definition requires refining the notion of flavor from a per-event label to a statistical category, and I showcase its direct experimental applicability at colliders. Throughout, I synthesize concepts from particle physics with ideas from statistics and computer science to expand the theoretical understanding of particle interactions and enhance the experimental capabilities of collider data analysis techniques.
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162097</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Preoptic Neurocircuit that Regulates Blood Glucose Homeostasis</title>
<link>https://hdl.handle.net/1721.1/162096</link>
<description>A Preoptic Neurocircuit that Regulates Blood Glucose Homeostasis
Roessler, Julian McFadden
The preoptic area (POA) is the core thermoregulatory center of all known endothermic species and balances heat generation and cooling in response to environmental stimuli. This delicate balance is executed via a brain-body exchange of sensory information and thermoregulatory output that is intimately connected to the nutritional state of the organism. When faced with food deprivation, certain endotherms engage in torpor, a behavior in which body temperature and metabolic rate are substantially depressed to improve the probability of organismal survival. Induction of torpor is regulated by anteroventral POA (avPOA) Vglut2⁺/Adcyap1⁺ neurons, which are necessary and sufficient to induce this state. How these neurons regulate the metabolic depression observed during torpor remains poorly understood. In this work, we show that activation of avPOA_Vglut2/PACAP neurons results in temperature-independent, whole-body changes in fuel usage, from glucose to fatty acids, driven predominantly by insulin signaling defects in skeletal muscle. This metabolic shift is executed via engagement of the hypothalamic-pituitary-adrenal axis, and impairment of this process via silencing of avPOA_Vglut2/PACAP neurons results in a loss of fasting glucose homeostasis. Taken together, these results nominate torpor-associated avPOA_Vglut2/PACAP neurons as core regulators of glucose homeostasis, and provide a basis for understanding how endotherms utilize hierarchical control of metabolism to tune energy expenditure and survive extreme periods of energetic deprivation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162096</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing population-level variation in mRNA splicing and implications for human genetic interpretation</title>
<link>https://hdl.handle.net/1721.1/162095</link>
<description>Characterizing population-level variation in mRNA splicing and implications for human genetic interpretation
Jacobs, Hannah N.
Alternative splicing is the process by which a single gene sequence gives rise to multiple RNA sequences. DNA mutations in this gene sequence can alter this process, shifting the relative usage of RNA sequences. This relative usage is called percent spliced in (PSI). Sometimes changes in PSI trigger a change in function at the level of the cell, the organism, or fitness. The consequences of splicing variability, and the contribution of genetic variation to this process, remain incompletely characterized.&#13;
&#13;
In this thesis, we seek to characterize the splicing events specifically present in a subset of the human population. We use the Genotype-Tissue Expression project (GTEx), which encompasses genomic DNA sequence information and bulk mRNA data from 49 tissues in 838 individuals. In this dataset we implement a 3-component beta-binomial model using RNA-sequencing reads, at a tissue-specific level, to reliably call splicing events present in a subset of the samples within a tissue. We call these naturally variable exons (NVEs), and identify a total of 57,271 unique NVEs in GTEx. We find NVEs in a large portion of the transcriptome, existing in 75% of all protein-coding genes.&#13;
&#13;
The beta-binomial model generates a population distribution for each NVE, which we leverage to estimate an NVE's frequency at a PSI level of interest. This enables us to compare NVEs by their frequencies. We find that NVEs tend to be either rare (≤ 10%) or quite high (≥ 90%) in frequency in the population. We find that NVEs at higher frequencies tend to be in 5' untranslated regions, while those at lower frequencies tend to be in coding regions.&#13;
&#13;
Sixty percent of NVEs have been previously found to be modulated by genetic variants. We find that proximity to a splice site is one of the most important features for predicting whether a genetic variant will impact splicing in GTEx, enabling better predictions than existing methods (an increase in AUC of 0.39). Surprisingly, we find that NVEs tend to be in genetically constrained genes (depleted of loss-of-function mutations), with the lowest-frequency NVEs occurring in the most constrained genes. We find a subset of genetically-modified NVEs that target genes in a manner consistent with inducing nonsense-mediated decay (NMD). We highlight several such variants linked to disease, including heart disease.&#13;
&#13;
These findings demonstrate that quantifying the population frequency of splicing events can reveal novel axes of molecular variability, and provide potential insight into the evolution of alternative splicing.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162095</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biochemical Characterization of the DUF3328 Protein in the Biosynthesis of Cyclic Peptide Cyclochlorotine</title>
<link>https://hdl.handle.net/1721.1/162094</link>
<description>Biochemical Characterization of the DUF3328 Protein in the Biosynthesis of Cyclic Peptide Cyclochlorotine
Huang, Wentao
Cyclic peptide natural products are valuable sources of medicines and exhibit significant biological and chemical diversity. Cyclochlorotine, a fungal cyclic pentapeptide produced by Talaromyces islandicus, possesses unique structural modifications, including dichlorination and hydroxylation, yet the enzymatic basis for these transformations remains poorly understood. This study biochemically characterizes the Domains of Unknown Function 3328 (DUF3328) protein family and investigates its role in cyclochlorotine biosynthesis.&#13;
Through transcriptomic sequencing and CRISPR/Cas9 knockout experiments, I revealed that CctP2 is essential for chlorination and CctR is required for hydroxylation. Computational sequence and structural analyses using AlphaFold suggested that DUF3328 proteins contain a conserved HxxHC(x)nHxxHC motif, a putative metal-binding site. Structural modeling further indicated that DUF3328 proteins form disulfide-linked homodimers, an unusual feature among biosynthetic enzymes.&#13;
To elucidate their biochemical roles, I purified CctR and CctP2 from Sf9 insect cells, overcoming challenges posed by their membrane association and intrinsic disorder. In vitro assays demonstrated that CctR is a copper-dependent enzyme that hydroxylates cyclochlorotine, and that dimerization is essential for its activity. Mechanistic studies using isotopic labeling confirmed dioxygen as the oxygen source. Copper redox cycling was found to be essential, with Cu(I) required for catalysis.&#13;
This work establishes DUF3328 proteins as a new class of copper-dependent enzymes involved in fungal secondary metabolism. The discovery of their catalytic mechanisms expands our understanding of enzymology and provides a foundation for future enzyme characterization in this family. More broadly, this study highlights the power of computational tools such as AlphaFold in guiding the functional characterization of previously uncharacterized protein families.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162094</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Spatial Economics</title>
<link>https://hdl.handle.net/1721.1/162093</link>
<description>Essays on Spatial Economics
O'Connor, Daniel
This thesis comprises three chapters, each studying optimal policy in a model of spatial economics. The first chapter considers the policy problem of a country looking to influence the geopolitical actions of another country. The second chapter considers how a central government should design place-based transfers to fight local recessions. And the third chapter considers how granularity affects the geography of economic activity and what that might mean for optimal place-based policy.&#13;
&#13;
In the first chapter (joint with John Sturm Becko), we suppose a country anticipates that it may use trade as a point of leverage in future geopolitical conflicts. How should it develop domestic industries and international trading relationships today in order to strengthen its hand tomorrow? Domestically, we show that the country should abstain from peacetime industrial policies if it can credibly threaten trade taxes as geopolitical punishments during conflict, but not otherwise. Internationally, its peacetime trade policy should promote the accumulation of foreign capital that makes foreign prices, not foreign welfare, more sensitive to trade during conflict. We apply these insights to provide the first quantitative exploration of the US's optimal policies for building geopolitical power vis-à-vis China. The optimal policy promotes US-China trade on both the import and export margins.&#13;
&#13;
In the second chapter, I note that many regions in the US experience depressed labor demand and high unemployment, even when the rest of the United States does not. How should the US government respond? In this chapter, I characterize optimal place-based transfers in a dynamic economic geography model with nominal wage rigidity and compare them to observed government transfers. I show that transfers not only have a stimulus effect—by boosting local demand—but also a migration effect—by encouraging local residents to stay. Analytically, I provide optimal transfer formulas that capture this trade-off and show, perhaps surprisingly, that the optimal transfer to a distressed region may be a tax due to the migration effect. All else equal, transfers should be larger in the short-run and when there are distressed regions nearby. Quantitatively, I find that observed transfers are both too small in the short-run and too large in the medium-run, achieving just over half of the gains from the fully optimal response to idiosyncratic local shocks. I conclude by exploring how the US government could have responded to the China trade shock in the 2000s.&#13;
&#13;
In the third chapter (joint with Shinnosuke Kikuchi), we ask: how does the fact that individual firms dominate labor markets affect the geography of economic activity? And what does it mean for the efficiency of firm entry? To answer these questions, we propose a new economic geography model featuring granular firms subject to idiosyncratic shocks. We show that average wages increase in the size of the local labor market due to that granularity, and provide a sufficient statistic for the contribution of our mechanism. We further prove that too few firms enter in equilibrium. Using Japanese administrative data on manufacturing, we provide evidence consistent with our mechanism and quantify it. Our mechanism implies that markets with around 2 firms per sector have an elasticity of wages to population of 0.05 and firms capture only 85% of their contribution to production in profits. In large markets like Tokyo, the elasticity is around 0.001, and firm entry is approximately efficient. Enacting optimal place-based industrial policy would increase the number of firms in modest-sized cities by more than 30% and actually decrease the number of firms and people in Tokyo. JEL Codes: F1, E3, R1.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162093</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of a Diamond Proton Recoil Telescope for DT Neutron Measurements in the LIBRA Experiment</title>
<link>https://hdl.handle.net/1721.1/162092</link>
<description>Characterization of a Diamond Proton Recoil Telescope for DT Neutron Measurements in the LIBRA Experiment
Edwards, Emily
The LIBRA project investigates tritium breeding using beam-target style DT neutron generators to irradiate molten salt vessels. A critical aspect of understanding this process is the characterization of the energy and flux anisotropies within the neutron environment, which are inherent to the beam-target neutron generation method. These spectral and flux characteristics directly impact tritium production and the interpretation of experimental results, which makes the neutron field characterization essential for a complete understanding of the tritium breeding system. This paper presents the use of an sCVD diamond detector and an sCVD diamond proton recoil telescope to characterize the neutron environment produced by the DT neutron generator employed in the LIBRA experiments. The results of these measurements provide insight into the neutron flux and energy distributions incident on the breeding salt, enabling a more complete understanding of the neutron input in the LIBRA experimental tritium breeding process.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162092</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Novel Technologies to Investigate DNA Double-Strand Break Repair Uncovers a Role for the ATM Kinase in Error-Free NHEJ with Implications for Neurodegenerative Diseases</title>
<link>https://hdl.handle.net/1721.1/162091</link>
<description>Development of Novel Technologies to Investigate DNA Double-Strand Break Repair Uncovers a Role for the ATM Kinase in Error-Free NHEJ with Implications for Neurodegenerative Diseases
Kruswick, Alex J.
DNA double-strand breaks (DSBs) are considered the most lethal genotoxic lesions because they can result in chromosomal translocations or a major loss of genetic information if repaired incorrectly. To preserve genomic integrity, mammalian cells have evolved a set of complementary and redundant repair pathways that faithfully repair DSBs. Consequently, eukaryotic cells utilize an evolutionarily conserved set of protein kinase signaling pathways that recognize and respond to DNA damage by pausing cell cycle progression and recruiting DNA repair machinery to ultimately determine the fidelity of DSB repair. Mutations and/or acquired defects that compromise the function of DNA damage response (DDR) pathways result in enhanced mutagenesis and underlie the development and progression of cancer and neurodegenerative conditions. How cells choose which DSB repair pathways to use when fixing a DSB in order to maximize repair fidelity is incompletely understood.&#13;
To better understand how cells decide which repair pathway to use when fixing DSBs and to specifically investigate protein kinase signaling that coordinates DSB repair pathway selection, we developed a set of multicolor fluorescent reporter systems, named DSB-Spectrum and DSB-Prism. DSB-Prism is uniquely designed to report on the choice between DSB repair via error-free non-homologous end joining (EF-NHEJ), mutagenic end joining (mut-EJ), alternative end joining (alt-EJ), homologous recombination (HR), and single strand annealing (SSA) at a single break created within individual cells by CRISPR-Cas9. We demonstrate that DSB-Prism robustly reveals patterns of DSB repair pathway compensation following chemical inhibition or genetic perturbation of DDR repair factors.&#13;
We report that the majority, but not all, of EF-NHEJ repair requires DNA-PKcs. We observed that DNA-PKcs kinase activity is essential for its function in EF-NHEJ repair, while autophosphorylation of DNA-PKcs on the previously mapped ABCDE phosphorylation site cluster plays only a minor role in this process, primarily through the Ku80 DNA-PKcs long-range synaptic complex.&#13;
We utilized DSB-Prism to uncover a novel role for the ATM kinase in promoting EF-NHEJ repair at highly transcribed genes. We show that ATM promotes EF-NHEJ repair via two genetically distinct pathways independently of DNA-PKcs kinase signaling. First, ATM promotes EF-NHEJ through a phosphorylation-dependent interaction between 53BP1 and RIF1 independently of the Shieldin and CST complexes, which we propose serves to physically hold DSB ends together in a redundant manner with the core NHEJ-mediated end synapsis machinery. Second, we propose that ATM promotes EF-NHEJ via promoting R-loop resolution by both SETX and ERCC6L2. We show that the role of ATM in promoting EF-NHEJ is largely independent of MRN-dependent ATM activation, and completely independent of ROS-dependent ATM activation. We discover a novel N-terminal set of positively charged residues that we propose directly interact with the negatively charged DNA phosphate backbone adjacent to DSBs in order to activate ATM. These N-terminal positively charged residues, in combination with MRN, promote binding of ATM to chromatin but are particularly important for ATM’s function in promoting EF-NHEJ repair within the DSB-Prism reporter. &#13;
Finally, we characterized a cohort of ATM patient mutations and observed that the ability of ATM mutants to promote EF-NHEJ perfectly correlates with patients' clinical A-T disease severity. We propose that this loss of EF-NHEJ repair is a major mechanistic cause of the Purkinje cell death and cerebellar neurodegeneration observed in A-T patients.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162091</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contested Values of Eco-Developments: Leveraging Private Finance to Integrate Biodiversity into Nusantara’s City Development Framework</title>
<link>https://hdl.handle.net/1721.1/162090</link>
<description>Contested Values of Eco-Developments: Leveraging Private Finance to Integrate Biodiversity into Nusantara’s City Development Framework
Leung, Yu Hang (Hannah)
Rapid expansion of urban populations has spurred the construction of new cities, alongside a heightened urgency to adopt climate risk mitigation and disaster resilience strategies. Along with the global need for Nature-based Solutions (NbS), new eco-developments planned within biodiversity hotspots should adopt resilient climate adaptation strategies for long-term benefits. However, these projects are often not financially justified or positioned to sustain long investment and holding periods. This thesis examines the development of Ibu Kota Nusantara (IKN) in Indonesia as an evolving eco-development case study on how biodiversity could be repositioned as a key aspect of investment frameworks.&#13;
New cities and eco-developments tend to rely on external investment, as internal structures navigate the challenges of rapid growth while seeking a self-sustaining equilibrium. For IKN, private investors hesitate to invest in a project situated in an unstable political landscape, while low government expenditure and poor governance structures have marred development progress. Based on the inherent need to build to support a growing urban population, this multidisciplinary thesis explores three components needed to design an eco-development project: a consistent way to value biodiversity in comparison to development values, proper environmental governance, and sustainable financial instruments to support the initial and operational expenditures of a project. Measurement approaches such as GBS-FI and S&amp;P NBS can streamline the valuation of a corporation’s dependency on biodiversity, based on valorization models developed by SEEA-EA and the United Nations’ Integrated National Financing Framework. A mixed-methods approach of qualitative case study analysis and in-depth review of existing and potential financial instruments is used to understand the demand and supply sides of eco-developments. A&#13;
Contingent Valuation Method assessing buyers’ Willingness-To-Pay, together with a qualitative questionnaire on perceived values of biodiversity, provides insight into local understanding of biodiversity and willingness to pay premiums toward the elevated costs of eco-developments. The intention of this research is to explore how biodiversity could be recentered as a foundational element of the sustainable development of cities. More broadly, this research seeks to synthesize the interdisciplinary discussions around development, environmental policy and ecological planning, while evaluating the feasibility of innovative financial mechanisms to mobilize capital for large-scale eco-development projects.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162090</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven Assessment of Digital Age Inclusion: Topic Modeling Seoul’s Digital Governance Platform to Evaluate Elderly Representation</title>
<link>https://hdl.handle.net/1721.1/162089</link>
<description>Data-Driven Assessment of Digital Age Inclusion: Topic Modeling Seoul’s Digital Governance Platform to Evaluate Elderly Representation
Lim, Sungmoon
This paper examines the intersection of population aging and digital civic government in Seoul, South Korea. As cities worldwide digitize and age simultaneously, understanding elderly citizens' representation in digital governance platforms becomes critical for inclusive urban governance. As a leader in both aging and urban technologization, Seoul serves as an ideal case study. Combining computational analysis of civic queries with qualitative interviews, this study investigates whether elderly residents' concerns are adequately represented in Seoul's e-government platform. Comparing these datasets reveals significant disparities in how elderly concerns are represented digitally: despite Seoul's technological sophistication and digital inclusion efforts, substantial gaps remain in representing elderly citizens' concerns in governance forums, signaling gaps that may undermine age-inclusive development. This research contributes to theoretical understandings of digital democracy and urban aging while offering practical insights for designing more inclusive systems that address the realities of dual urban phenomena—aging and digitization—as they coalesce in cities.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162089</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Fluid Dynamics Modeling of Compact&#13;
Steam Generators</title>
<link>https://hdl.handle.net/1721.1/162088</link>
<description>Computational Fluid Dynamics Modeling of Compact&#13;
Steam Generators
Jiragoontansiri, Witiwat
Compact Steam Generators (CSGs) are vital components in Small Modular Reactors (SMRs), particularly within Integral Pressurized Water Reactor (iPWR) configurations where compactness and high performance are essential. This thesis explores the use of Multiphase Computational Fluid Dynamics (M-CFD) to simulate two-phase flow boiling in CSGs based on Printed Circuit Heat Exchanger (PCHE) technology. Using the commercial CFD code STAR-CCM+, two modeling approaches—the Volume of Fluid (VOF) model and the Two-Phase Thermodynamic Equilibrium (TPTE) model—are applied to simulate both adiabatic and heat transfer conditions within mini-channels. The simulations are validated against experimental data from two sources: an R-134a-based vertical test loop developed at MIT’s Greenlab and a water-based PCHE test section from Kromer’s prior work. Key two-phase flow parameters such as void fraction, pressure drop, and heat duty are evaluated and compared to experimental benchmarks. Calibration methodologies are implemented to improve predictive accuracy. The validated models are then used to simulate realistic CSG operating conditions based on Babcock &amp; Wilcox and NuScale reactor designs. Results indicate that PCHE-based CSGs, despite being smaller, are capable of delivering favorable thermal and hydraulic performance, with slightly better results compared to the existing steam generator design. Overall, the study demonstrates the potential of M-CFD tools to support the design and optimization of CSGs for next-generation nuclear applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162088</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Farebox Freedom: An analysis of centralized fare policy interventions relative to the suburbanization of poverty</title>
<link>https://hdl.handle.net/1721.1/162087</link>
<description>Farebox Freedom: An analysis of centralized fare policy interventions relative to the suburbanization of poverty
Chachra, Vir
The United States is witnessing a shift in its geography of poverty, with suburban communities experiencing greater increases in poverty rates relative to urban cores. However, transit service and fare policies have not kept pace with this demographic shift, inadequately meeting the needs of a growing population of lower-income riders in the suburbs, particularly those served by higher-cost modes like commuter rail.&#13;
&#13;
This thesis confronts this evolving dynamic, bridging a research gap between transit fare policy and the suburbanization of poverty. It analyzes seven transit systems across the US through a Spatial Difference-in-Differences research approach, revealing mode-specific shifts in transit cost burdens from 2019 to 2021 and the impacts of these shifts on social vulnerability as defined by the CDC. The thesis also explores federal policy pathways to create greater fare equity in light of this dynamic, either through supporting operations costs for transit agencies or through a flat-fare national transit pass for riders, akin to Germany's Deutschlandticket (D-ticket) program.&#13;
&#13;
Focusing on suburban commuter rail communities across the sampled networks, the analysis finds that in 2021, communities with only commuter rail access and higher-than-average social vulnerability scores were associated with an approximately 11% additional increase in transit cost burdens compared to all other groups, while also experiencing an increase in transit cost burdens overall. Furthermore, a two-fold increase in transit costs as a share of median income in 2021 was correlated with an additional 7.4% rise in social vulnerability index scores for commuter rail communities, relative to those with access to other modes that are closer to the urban core. While these communities have a 38% lower social vulnerability score, the analysis estimated a 60% increase from 2019 to 2021, highlighting a disproportionate increase and challenging the assumption of the wealthy commuter rail suburb.&#13;
&#13;
This increasing sensitivity to transit cost burdens points to a significant ongoing interaction between national trends of suburbanization of poverty and fare policy. Given that many transit agencies face funding constraints and are nationally inconsistent in their low-income fare programs, they may be structurally limited in their ability to address these disparities on their own. This analysis considers lessons from historical policies such as the National Mass Transportation Assistance Act of 1974 and recent international programs like Germany’s D-ticket, to suggest that federal support for transit operations—paired with inclusive, mode-agnostic fare programs—would help address these emerging inequities in transit affordability amid the suburbanization of poverty.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162087</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Mechanisms that Determine the Edge&#13;
Electron Density Profile in Tokamaks</title>
<link>https://hdl.handle.net/1721.1/162086</link>
<description>Understanding the Mechanisms that Determine the Edge&#13;
Electron Density Profile in Tokamaks
Miller Hernández, Marco Andrés
The interaction between the physics of plasma turbulence and that of atomic neutral dynamics, intrinsic to the tokamak edge, makes prediction of edge profiles difficult. It is unclear to what extent neutral ionization, as opposed to particle transport, is responsible for the build-up of edge density gradients. To this end, this thesis combines electron and neutral measurements across the edge region with high-fidelity simulations of neutrals to study these processes in high density and high magnetic field plasmas on Alcator C-Mod. This is enabled by measurements of Lyman-α (Lyₐ) emission made by the LYMID camera, as well as measurements of electron density, nₑ, and electron temperature, Tₑ, by the edge Thomson scattering (ETS) system. These result in a large database of inferred neutral density, n₀, and ionization source, S_ion, as well as radial particle flux, Γ_D, and effective diffusivity, D_eff, for stationary periods. For selected discharges, these are used to impose additional constraints on simulations of neutral dynamics in the plasma edge using SOLPS-ITER. This methodology is used to examine stiffness in the edge gradients forming the so-called “pedestal” in the high-confinement mode (H-mode) in response to increased ionization. This phenomenon is found to be associated with changes to local particle transport, and is observed to be correlated with a local parameter governing the influence of turbulence from interchange instabilities as opposed to that resulting from drift-waves. Reaching the threshold in this parameter may be avoided through improved particle control and is found to also be highly dependent on the 2D distribution of neutrals in the unconfined plasma region. The competition between interchange modes and the drift-wave is probed on Alcator C-Mod through validation of a semi-empirical model for tokamak operational boundaries.
The separatrix operational space (SepOS) model [Eich &amp; Manz, Nuclear Fusion (2021)] predicts boundaries for the L-H transition, the L-mode density limit, and the ideal MHD ballooning limit in terms of plasma quantities evaluated using separatrix parameters for a wide range of Alcator C-Mod plasmas. These boundaries are expressed in terms of dimensionless quantities borrowed from electromagnetic fluid drift turbulence (EMFDT) theory. The combined workflow of ETS and LYMID also allows for evaluation of quantities associated with plasma transport in connection with the plasma operational space. Experimental evidence of changes to particle transport near the boundaries is provided for the first time. Organization of Γ_D at the separatrix is observed in both H-modes and low-confinement modes (L-mode) for key dimensionless parameters. The model is also used to elicit the physics of high confinement regimes free of Type-I edge localized modes (ELMs). Databases of the transition to the improved-confinement (I-mode) and that between the Type-I ELMy H-mode and the Enhanced Dα (EDA) H-mode are studied using the SepOS framework. An empirical model for particle transport in the EDA H-mode to explain pedestal saturation in this regime is developed and tested. The findings are then leveraged for modeling of next-generation devices, with priority on core-edge integration and improved power handling.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162086</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolving Concepts of the Public Interest in Comprehensive Planning</title>
<link>https://hdl.handle.net/1721.1/162085</link>
<description>Evolving Concepts of the Public Interest in Comprehensive Planning
Tagliani, Jessie
The public interest is an important, yet contested, concept in the field of planning. On the one hand, it offers a normative criterion against which planning decisions can be evaluated and is traditionally viewed as the source from which planners derive their authority. However, the precise nature of the concept is fiercely debated by both planning practitioners and theorists, with some going so far as to denounce its existence. Today, the increasingly pluralist and complex nature of communities leads to questions over the concept’s relevance and applicability. In the second half of the twentieth century, planning theoreticians began assembling a body of literature surrounding this concept, mostly in the form of typologies of the definitions that have been ascribed to the public interest. However, my review of the literature revealed that the study of the public interest as a normative criterion for planning has almost entirely taken place in the realm of planning theory. Therefore, I sought to add to the empirical scholarship concerning the public interest by analyzing it from two angles: first, I sought to understand how the public interest as a historical concept has changed and evolved alongside the field of planning throughout the twentieth century. Second, I chose the field of comprehensive planning as my analytical lens due to its longevity across the history of the planning profession and its close affiliation with the concept of the public interest. Specifically, I sought to analyze how the public interest is manifested in a series of comprehensive plan documents and thereby illustrate how the concept’s operationalization has evolved over the course of the past half century of planning. I began my analysis by drawing on over fifty years of scholarship to construct my own typology of the main definitions of the public interest. I then applied these definitions to four different models of comprehensive planning that were developed between 1962 and 2012.
I also obtained a second perspective on the evolution of the concept of the public interest by examining a series of comprehensive plans adopted by the City of Annapolis between 1964 and 2022. The two analyses revealed very different trajectories in the evolution of the public interest as an empirical concept. On the one hand, the four models demonstrate a fairly linear evolution in what constitutes the substance and process of determining the public interest, which can be broadly classified as achieving social equity, the responsible stewardship of natural resources, and authentic citizen involvement. By contrast, the five Annapolis comprehensive plans did not neatly follow the same evolution. Instead, a recurring concern for many of the Annapolis plans is the conservation of the physical city through the control of the city’s growth, the careful maintenance of its economy, and the preservation of its urban fabric. However, the more recent plans demonstrate a stronger commitment to the social values and processes espoused by the four planning models, indicating that there is growing consensus in the field of planning today regarding an empirical understanding of the public interest.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162085</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prioritizing Sidewalk Accessibility Improvements for the Aging Population and Individuals with Disabilities: A Case Study of Bandung, Indonesia</title>
<link>https://hdl.handle.net/1721.1/162084</link>
<description>Prioritizing Sidewalk Accessibility Improvements for the Aging Population and Individuals with Disabilities: A Case Study of Bandung, Indonesia
Kurniaputri, Aulia
While walking is fundamental to inclusive urban mobility, major cities in Indonesia continue to face challenges in providing barrier-free pedestrian infrastructure, even for individuals without physical impairments. As the population of older adults in Indonesia continues to grow, the risk of disability within this demographic will increase, contributing to the overall number of individuals with disabilities. In Bandung City, there is a rising awareness across various sectors of society regarding the rights of older adults and individuals with disabilities to navigate sidewalks safely. These trends highlight the importance of improving inclusivity on city streets, where people travel daily to reach their essential and desired destinations.&#13;
&#13;
This thesis explores an evidence-based methodology to prioritize sidewalk accessibility improvements for older adults and individuals with physical disabilities, aiming to develop a prioritization strategy that targets maximum impacts. Accessibility scores and pedestrian flow counts are calculated with the Urban Network Analysis (UNA) toolbox. Three types of user groups—non-disabled individuals, cane or crutch users, and wheelchair users—were assigned penalties for each type of barrier on a sidewalk segment, resulting in varying perceived distances. Those with physical mobility limitations perceived longer distances than those without. To identify priority locations, a system-selection ranking was applied that considered sidewalk segments with both high-frequency usage and significant discrepancies between actual and perceived lengths. The methods outlined in this thesis are scalable for use in other neighborhoods and cities, thereby supporting data-driven decision-making in pedestrian infrastructure improvements.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162084</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relationality and Reciprocity in Civic Design: Public Engagement and Offshore Wind Development in the Gulf of Maine</title>
<link>https://hdl.handle.net/1721.1/162083</link>
<description>Relationality and Reciprocity in Civic Design: Public Engagement and Offshore Wind Development in the Gulf of Maine
Bendixen, Amanda
Offshore wind projects are inherently complex, requiring the integration of social, environmental and technical planning. Meaningful engagement with communities is critical to ensuring procedural fairness, trust and equity throughout the development process. Yet, the role of civic design in shaping these outcomes remains unexplored. This thesis investigates how relationality and reciprocity are fostered through the civic design of public engagements for offshore wind development in the Gulf of Maine. Through qualitative analysis of public meeting transcripts – using thematic coding and memo writing in Atlas.ti – this study identifies civic design elements and recurring engagement themes. &#13;
&#13;
The findings highlight relational accountability as a mechanism for building trust, transparency and procedural fairness. They also explore how civic design can support reciprocity, while revealing how structural barriers can undermine relationality. This research demonstrates the possibilities and limitations of civic design in fostering relational and reciprocal public engagements. It concludes with recommendations for incorporating civic design elements that promote sustained, reciprocal relationships, accountability and long-term community involvement in offshore wind development.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162083</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond Safety and Surveillance: New Possibilities for Public Light After Dark</title>
<link>https://hdl.handle.net/1721.1/162082</link>
<description>Beyond Safety and Surveillance: New Possibilities for Public Light After Dark
Corlett, Lucy
As cities refocus planning and design goals in response to evolving global standards for urban well-being, sustainability, and spatial equity, research on best practices and innovative considerations for the public realm has expanded. As a result, a new movement in research and guidance on public light has emerged. Rather than continuing to view lighting as a punitive means of enforcing surveillance and public safety, this movement in research and practice advances radically inclusive, responsive design methods that use light to redress inequality in the built environment. This thesis builds on a growing body of research that establishes the powerful influence of light on human experience and perception, initiating a dialogue between different models for place-based approaches to lighting design in shared public spaces. Drawing on in-depth studies of these models, interviews with stakeholders, scholarship, policy, and design and planning practice, this thesis recommends that city planners serve as the bridge between ideation and implementation in a new era of urban illumination.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162082</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Community Benefits Agreements for Equitable Renewable Energy Siting: The Importance of Negotiation Power and Stakeholder Engagement</title>
<link>https://hdl.handle.net/1721.1/162081</link>
<description>Community Benefits Agreements for Equitable Renewable Energy Siting: The Importance of Negotiation Power and Stakeholder Engagement
Paul, Sanjana
As renewable energy development accelerates across the United States, conflicts over project siting have become increasingly common; often rooted not in opposition to clean energy itself, but in concerns over fairness, community inclusion, and long-term accountability. This thesis investigates how Community Benefits Agreements (CBAs) can serve as tools to address these challenges, focusing on how negotiation dynamics, mediation, and stakeholder engagement shape the equity and enforceability of CBAs in renewable energy siting. Using a mixed-methods approach, this research draws on qualitative case studies, stakeholder interviews, and legal-policy analysis, alongside a limited quantitative assessment of CBA implementation outcomes. The study examines both the procedural and structural conditions that influence how benefits are negotiated, formalized, and monitored. By analyzing cases that include third-party facilitation, amendment mechanisms, and diverse stakeholder participation, the thesis identifies best practices for designing CBAs that move beyond performative engagement and toward genuine community empowerment. Ultimately, this research offers a multidimensional understanding of CBAs as emergent governance instruments situated at the intersection of infrastructure planning, environmental justice, and public accountability. It concludes by proposing a model state-level regulatory framework to support equitable CBA development and embed principles of justice into the future of renewable energy siting.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162081</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Public Health Governance at the Watershed Scale: Exploring Opportunities for Multi-sector Governance to Advance Planetary Health in Northeastern Massachusetts</title>
<link>https://hdl.handle.net/1721.1/162080</link>
<description>Public Health Governance at the Watershed Scale: Exploring Opportunities for Multi-sector Governance to Advance Planetary Health in Northeastern Massachusetts
Morales, Daniela
Many health and environmental regulations apply only within specific political or administrative boundaries, creating a mismatch between the spatial scale of natural systems that impact health and the spatial extents of relevant regulations. For example, in Massachusetts, local Boards of Health govern specific public health and environmental issues through spatialized regulatory powers that carry significant weight in both local and larger geopolitical contexts. Although watershed management influences regional public health outcomes through impacts to water quality, water quantity, and climate resilience measures, the organizations focused on watershed management do not have influence that matches the power of public health entities. This thesis explores how watershed management decisions could have similar weight to other public health governance decisions by examining the specific speculative case of what interest there is in, and what barriers there are to, watershed management organizations in Northeastern Massachusetts working as public health governing units, such as local Boards of Health. Using a mixed methods approach, combining organizational and policy analyses with semi-structured key informant interviews and surveys, I assessed the opportunities, barriers, and interest for multi-sector watershed and health governance to advance Planetary Health in Northeastern MA. The findings showed low receptiveness towards adopting a new regional governance system due to both perceived and actualized legal, organizational, and social barriers. The findings also highlighted an interest in strengthening existing regional partnerships and building new collaborations across the fields of public health and watershed management for more effective approaches towards environmental health decision making.
These results suggest a need for additional interdisciplinary training for both sectors, and the creation of new spaces and relationships for collaboration between actors involved in public health, watershed management, and related fields.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162080</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Silence to Sankofa: The Role of Archives in Addressing Urban Renewal’s Displacement History</title>
<link>https://hdl.handle.net/1721.1/162079</link>
<description>From Silence to Sankofa: The Role of Archives in Addressing Urban Renewal’s Displacement History
Mohamed, Menatalla
In the post-World War II era, urban renewal was designed as a path towards the revitalization of American cities through public investment into the redevelopment of ‘blighted’ areas. Through eminent domain takings, urban renewal projects led to the forced relocation of residents from their homes and neighborhoods, with a disproportionate impact on Black, immigrant, and low-income communities across the country. The archives of the renewal period hold the story of this widespread displacement and are of significant value for contemporary planning practice. Through the lens of two case studies, this thesis explores how and why urban renewal archives are being revisited today to address this displacement history through institutional and community approaches to memorialization. In Cambridge, MA, the Cambridge Redevelopment Authority (CRA) is an example of an agency drawing on its own archive to publicize its role in past forced relocation through its use of eminent domain. In Rochester, NY, Clarissa Uprooted is a public history and community building project centered around the story of Clarissa Street, a historically Black neighborhood that was demolished for renewal in the 1960s. Through document analysis and interviews, I examine how these efforts to activate urban renewal archives and better understand the scope and impact of forced relocation provide avenues for planners and community members to remember the past, acknowledge systemic harms, and reflect on repair. Despite the different positionalities of the CRA and Clarissa Uprooted, a comparative approach also highlights how both organizations have created opportunities to unearth histories of dissent to urban renewal, more fully recognize the legacy of commercial displacement, and imagine avenues to planning, policy, and institutional change. 
This research demonstrates the significance of local archival initiatives that draw upon the past to better position planners and communities to face the urban challenges and inequities of the present and future.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162079</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and oxidation behavior of Cr alloyed uranium borides at high temperatures</title>
<link>https://hdl.handle.net/1721.1/162078</link>
<description>Synthesis and oxidation behavior of Cr alloyed uranium borides at high temperatures
Moeykens, Riley S.
Following the nuclear accident at Fukushima Daiichi Power Station in 2011, an urgent need for safer, more economical, and versatile nuclear fuels has arisen. In recent years, uranium boride (as a tetraboride and diboride) has been further investigated as a candidate fuel form for its high thermal conductivity, high melting point, high uranium loading, and potential for dual use as a fuel and burnable absorber. In this work, the synthesis, structural behavior, and oxidation behavior of uranium borides and chromium- and yttrium-alloyed uranium borides are investigated. The structures of the synthesized uranium borides and chromium- and yttrium-alloyed uranium borides were probed using synchrotron X-ray Powder Diffraction (XRD) and Pair Distribution Function (PDF) analysis with in-situ heating. The methods and challenges in synthesizing uranium boride and chromium- and yttrium-alloyed uranium boride, as well as the consequential thermophysical and oxidation properties of these potential fuel forms, are elucidated in this work.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162078</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Semi-Autonomous, Highly Automated, and Remotely Operated (SAHARO) Nuclear Reactors</title>
<link>https://hdl.handle.net/1721.1/162077</link>
<description>Towards Semi-Autonomous, Highly Automated, and Remotely Operated (SAHARO) Nuclear Reactors
Hallinan, Aidan M.
In the United States, comprehensive reactor design certification, site permitting, and operating licensing processes exist to ensure the safe and reliable operation of nuclear power plants (NPPs). Most of these plants have belonged to the same design class: large, centrally located Light Water Reactors (LWRs). Thus, our regulatory processes were tailored for their phenomenology and the unique challenges associated with their operation and maintenance. However, these types of plants may be impractical for specific energy markets, where smaller, non-LWR, highly flexible, and multi-faceted NPPs can be more optimal. The novelty of these designs and their use cases has further inspired new operating paradigms, which will be referred to as Semi-Autonomous, Highly Automated, or Remote Operations (SAHARO) in this thesis. While some of these new reactors have seen limited progress in design certification and licensing efforts under current regulatory practices, there remains little precedent for these novel operating approaches. To facilitate discussion, guide designers, and inspire regulatory progress, I begin by looking at existing regulations, licensing practices, technical guidelines, and other rules that govern the NPP design and operations. I then dive into current applications and discussions of the sub-components of SAHARO, across different technical domains as well as nuclear power, to gather technical, operational, and regulatory insights. To provide reactor design evaluators with an additional tool, I define a Risk-Complexity Score (RCS), which couples simple system complexity quantification with existing risk measures and can support risk-informed system analyses. 
I then conduct an internet network Quality of Service (QoS) test to demonstrate one of the many important considerations for remote operations stress-testing, which proposes an approach for evaluation within the SAHARO licensing process: the “SAHARO Coping and Minimum Inventory Assessment Strategy.” Lastly, based on my literature and industry reviews, I have constructed a framework that informs reactor designers on how to iterate through the SAHARO-based design process, while also enabling vendor-regulator collaboration and shared learning. Ultimately, I aim to help designers and regulators in the nascent fields of autonomous, automated, and remote NPP operations identify the key questions these technologies and systems must address to ensure safe, effective, and practical application.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162077</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Economic Growth and Innovation</title>
<link>https://hdl.handle.net/1721.1/162076</link>
<description>Essays on Economic Growth and Innovation
Lensman, Todd
A foundational observation by Robert Solow holds that long-run economic growth is primarily driven by the innovation and adoption of new technologies (Solow, 1957). This set of essays provides new theory and evidence to explain how firms choose which technologies to innovate and adopt. A point of emphasis, particularly in the first two chapters, is that complementarities across firms play an important role in determining the rate and direction of technological change. These complementarities arise as firms build shared knowledge by innovating (Chapter 1) and from joint consumption of new products (Chapter 2). They provide a new channel through which market structure and property rights affect long-run technological change.&#13;
&#13;
Chapter 1. The first chapter is motivated by the observation that the direction of innovation shapes both current technologies and future innovation opportunities, as firms acquire expertise and create public knowledge through discovery. But how do firms choose which technologies to develop? Do they ever fail to exploit new technological paradigms? I build a new model of innovation and firm dynamics to study a novel link between market structure, the direction of innovation, and economic growth: Expertise in a current technology gives incumbents a comparative advantage at innovating it relative to entrants, who instead favor a new technology with higher growth potential. Each firm’s innovation decisions influence others through knowledge spillovers, so the initial market structure can affect the long-run direction of innovation. Concentrating R&amp;D resources in a small number of firms allows faster accumulation of expertise, raising growth when all firms innovate the same technology. But it can lower growth when firms face a technology choice, amplifying the influence of incumbents and potentially delaying or preventing the emergence of the new technology. I provide empirical evidence for the theory using data on firm patenting and R&amp;D expenditures. I also show that it explains the historical development of mRNA vaccines, and I explore its implications for the highly concentrated innovation of artificial intelligence.&#13;
&#13;
Chapter 2. In the second chapter, joint with Rebekah Dix, we observe that innovations often combine several components to achieve outcomes greater than the “sum of the parts.” We argue that such combination innovations can introduce an understudied inefficiency—a positive market expansion externality that benefits the owners of the components. We demonstrate the importance of this externality in the market for pharmaceutical cancer treatments, where drug combination therapies have proven highly effective. Using data on clinical trial investments, we document several facts consistent with inefficiently low private innovation: firms are less likely than publicly funded researchers to trial combinations, firms are less likely to trial combinations including other firms’ drugs than those including their own drugs, and firms often wait to trial combinations including other firms’ drugs until those drugs experience generic entry. Using microdata on drug prices and utilization, we quantify the externalities that arise from new combinations and find that the market expansion externality often dominates the standard negative business stealing externality, suggesting too little innovation in combination therapies. As a result, firms may have incentives to free ride off others’ innovation, which we analyze with a dynamic structural model of innovation decisions. We use the model to design cost-effective policies that advance combination innovation. Redirecting publicly funded innovation toward combinations with high predicted market expansion or consumer surplus spillovers minimizes crowd out of private investments, increasing the rate of combination innovation and total welfare while remaining budget neutral.&#13;
&#13;
Chapter 3. The final chapter, joint with Daron Acemoglu, considers incentives to adopt transformative technologies that promise to accelerate productivity growth across many sectors but also present new risks from potential misuse. We develop a multi-sector technology adoption model to study the optimal regulation of transformative technologies when society can learn about these risks over time. Socially optimal adoption is gradual and typically convex. If social damages are large and proportional to the new technology’s productivity, a higher growth rate paradoxically leads to slower optimal adoption. Equilibrium adoption is inefficient when firms do not internalize all social damages, and sector-independent regulation is helpful but generally not sufficient to restore optimality.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162076</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Industrial Organization</title>
<link>https://hdl.handle.net/1721.1/162075</link>
<description>Essays in Industrial Organization
Dix, Rebekah A.
This thesis comprises three essays on industrial organization. The first chapter, joint with Todd Lensman, studies the innovation of cancer drug combination therapies. Innovations often combine several components to achieve outcomes greater than the “sum of the parts.” We argue that such combination innovations can introduce an understudied inefficiency—a positive market expansion externality that benefits the owners of the components. We demonstrate the importance of this externality in the market for pharmaceutical cancer treatments, where drug combination therapies have proven highly effective. Using data on clinical trial investments, we document several facts consistent with inefficiently low private innovation: firms are less likely than publicly funded researchers to trial combinations, firms are less likely to trial combinations including other firms’ drugs than those including their own drugs, and firms often wait to trial combinations including other firms’ drugs until those drugs experience generic entry. Using microdata on drug prices and utilization, we quantify the externalities that arise from new combinations and find that the market expansion externality often dominates the standard negative business stealing externality, suggesting too little innovation in combination therapies. As a result, firms may have incentives to free ride off others’ innovation, which we analyze with a dynamic structural model of innovation decisions. We use the model to design cost-effective policies that advance combination innovation. Redirecting publicly funded innovation toward combinations with high predicted market expansion or consumer surplus spillovers minimizes crowd out of private investments, increasing the rate of combination innovation and total welfare while remaining budget neutral.&#13;
&#13;
The second chapter, joint with Kelsey Moran and Thi Mai Anh Nguyen, studies the interoperability of electronic health record systems. Interoperability—the ability of different systems to work together—is an increasingly vital component of product markets. We study the impact of interoperability frictions in the context of US hospital Electronic Health Record (EHR) systems. While use of EHR systems is widespread, interoperability of these systems remains low, particularly across those produced by different EHR vendors. We examine how interoperability affects patients by considering both a direct, technological effect of influencing health information exchange and an allocative effect of shifting the flow of patients across providers. Using an event study design in which interoperability between hospital pairs changes when one changes EHR vendors, we find evidence for both channels. When two hospitals switch to having the same EHR vendor, charges and readmissions rates for patients who are transferred and referred between them decrease by 4% and 11%, respectively. In addition, these hospitals now share 8% more inpatient transfers and 9-10% more referrals. This change in patient flows further affects patient outcomes: patient health improves when their sending hospitals switch to EHR vendors used by higher-quality hospitals in the market and worsens when the opposite occurs. To quantify the welfare gain from reducing interoperability frictions, we estimate a demand model of how patients and providers trade-off interoperability with other receiving hospital characteristics when choosing where to send patients. The model is identified by changes in patient flows following changes in hospital EHR vendors and interoperability levels. We show that eliminating all interoperability frictions would redirect 7.5% of patients to different hospitals and increase joint hospital-patient welfare by 21%, the equivalent of a 57-kilometer reduction in travel distance.&#13;
&#13;
The third chapter, joint with Roi Orzach, studies the relationship between the fares of direct and connecting flights. Airlines operate complicated flight networks, often utilizing hub-and-spoke systems to efficiently route connecting travelers and optimize costs. Despite the prevalence of connecting travelers—accounting for approximately one-third of passenger itineraries—most analyses of the welfare effects of changes in competition focus on nonstop routes. We show that when firms face capacity constraints or adjustment costs, a price decrease on a direct route may incentivize firms to decrease prices on indirect routes using this route as a leg. We document that this pass-through is positive using the price effects of low-cost carrier entry and airline mergers: connecting fares decrease after low-cost carrier entry on one of the legs and increase after a merger of carriers that competed on one of the legs. Our findings demonstrate that ignoring these network effects leads to significantly underestimating changes in consumer surplus—by up to 115%—in response to changes in competition. Thus, considering full airline networks is essential to accurately estimating the impact of changes in competition on consumers.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162075</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decentralizing Power: Enabling Local Energy Resilience and Equity in Accra</title>
<link>https://hdl.handle.net/1721.1/162074</link>
<description>Decentralizing Power: Enabling Local Energy Resilience and Equity in Accra
Kulkarni, Nikita
Over 600 million people in Sub-Saharan Africa lack access to electricity. While Ghana is projected to achieve universal access by 2030, this national milestone obscures lived experiences of energy insecurity—particularly in urban centers like Accra. Despite a reported 91% grid connection rate, only 17% of Accra’s households consider their electricity supply reliable (Afrobarometer, 2022). Traditional, binary metrics—focused solely on grid connection—fail to capture essential social dimensions such as reliability, affordability, equity, and resilience, particularly under intensifying climate and urban pressures. My thesis investigates persistent energy insecurity in Accra, Ghana’s capital, through the lens of dumsor—a term used to describe recurring power outages that disrupt daily life and expose the fragility of the centralized electricity system. Drawing on the frameworks of splintered urbanism and the techno-politics of infrastructure failure, the thesis explores how dumsor reflects institutional fragmentation, political contestation, and inequality in the energy infrastructure space. In response to dumsor, I examine whether decentralized energy systems, particularly solar, can offer a pathway to local energy resilience—defined here as the place-based capacity to withstand dumsor through cleaner, more affordable alternatives for sustainable and reliable power. The study combines a technical assessment of Accra’s solar potential with a critical analysis of policy frameworks, climate finance mechanisms, and political agendas. Grounded in fieldwork and interviews with stakeholders across the energy value chain—from regulators and municipal actors to utilities, solar providers, financiers, residents, and advocacy groups—my thesis identifies on-the-ground barriers to and opportunities for the energy transition. While distributed solar presents a promising alternative with broad reach, persistent challenges in affordability, coordination, and delivery capacity threaten its scalability. Without targeted policy interventions, there is a risk of reinforcing a new form of energy infrastructure splintering—where only the affluent benefit. My thesis concludes that addressing energy insecurity in Accra requires strategic institutional and policy reforms to reconfigure governance, empower municipalities, and enable inclusive financing and policy at the most local level to enable solar alternatives. Energy decentralization offers a promising path forward, but the thesis underscores the ongoing role of the state as a critical enabler of an energy transition that is sustainable and just.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162074</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Vacant to Valuable: Building Community Wealth through Brownfield Redevelopment in Legacy Industrial Cities</title>
<link>https://hdl.handle.net/1721.1/162073</link>
<description>From Vacant to Valuable: Building Community Wealth through Brownfield Redevelopment in Legacy Industrial Cities
Jex, Sara Lynn
Recent federal investments in domestic manufacturing have renewed economic interest in legacy industrial cities across the United States. As these places attract new development, it is critical to safeguard against repeating the harms of the 20th-century exodus of industry and manufacturing jobs—when offshoring, suburbanization, and discriminatory housing policies deepened spatialized racial and economic inequalities. How can communities retain the wealth generated by new industrial investments, even if companies leave? This thesis explores how industrial brownfield redevelopment might utilize community wealth-building (CWB) strategies to advance equitable economic development. Focusing on the work of the Site Readiness for Good Jobs Fund in Cleveland, Ohio—a nonprofit preparing long-vacant industrial land for job-dense uses—it examines the potential for mission-driven organizations to use brownfield redevelopment to anchor wealth locally and proactively resist displacement. By analyzing case studies in Buffalo, Milwaukee, Chicago, and Philadelphia, the research tackles three questions: How do mission-driven organizations deliver community benefits through industrial brownfield redevelopments? In what ways do CWB models reshape how capital flows through redevelopment projects? And, what questions and decisions must the Site Readiness Fund consider to build lasting community wealth in Cleveland? Findings suggest that industrial brownfield redevelopment, when paired with strategic partnerships, site control, and a clear vision, offers a unique opportunity to implement CWB models. These strategies can help mission-driven organizations redistribute the risks and rewards of necessary public investments in brownfields and build trust with the community, ensuring that residents surrounding these reactivated sites benefit not just from new jobs, but from ownership and long-term economic power over their futures. 
The thesis concludes by applying these lessons to the Site Readiness Fund, outlining potential paths forward that embed economic democracy in the redevelopment of Cleveland’s legacy industrial areas.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162073</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Burning S(e)oul: A Body for Cremation</title>
<link>https://hdl.handle.net/1721.1/162072</link>
<description>Burning S(e)oul: A Body for Cremation
Kwun, Namhi
Every year, there are over 70,000 fatalities around Seoul, with only two operating crematoria in the city; that is over 100 bodies a day each institution needs to process efficiently. By May 26, it will have been six years since my grandfather was gone in those flames. Threading the remnants of mourning, Burning S(e)oul, in the form of a short film, is a dialogue between “absences” of bodies and architecture. It is presented as a triptych along three parallel timelines divided into five tableaux. Narrating the aftermaths of death, it reflects the perspectives of the bereaved, the deceased, and the workers along three mandatory days of grieving. Absence in this paradigm is not solely physical or emotional but rather phenomenological—what appears as a quotidian existence of oneself is stripped of its corpse, reaffirming that the inherent genius loci of the crematorium instead reflects a broader influence that institutions have experienced since post-war Korea. It argues that the systematized practice of death processing is an apparatus used to sever the genealogy of individual bodies from their role in affirming personal and communal kinships. Embedded within its architectural design, this alienation dismantles time by recasting the condition of death processes as an engineered state, rather than a historical or material one. This detachment is emblematic of the country’s postwar trajectory, where rapid modernization prioritized efficiency over continuity, severing longstanding rituals that once bound personal grief to communal memory. The friction between an engineered present and an inherited past manifests as a form of cultural desynchronization—one where the ostensibly modern remains haunted by the traditional. This shift extends beyond mere technical or practical concerns; it represents a deliberate method of assimilating a nonlinear societal modernization—one that, in its pursuit of progress, distances itself from historical trauma.
Yet this tension does not merely mark a transition; it accumulates as a generational melancholy, where the urgency of progress leaves grief suspended in an unresolved state, neither fully severed nor meaningfully preserved.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162072</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing Risks in Voluntary Forest Carbon Offsets Using Open Data: A Hybrid Framework Integrating Retrieval-Augmented Generation in LLMs and Geospatial Analytics</title>
<link>https://hdl.handle.net/1721.1/162071</link>
<description>Analyzing Risks in Voluntary Forest Carbon Offsets Using Open Data: A Hybrid Framework Integrating Retrieval-Augmented Generation in LLMs and Geospatial Analytics
Xu, Ziqing (Becky)
The credibility of voluntary carbon markets hinges on the quality of carbon offset projects, particularly in forestry and land-use sectors where claims of additionality and emissions reductions are often disputed. This paper introduces a novel, open-source approach to evaluating carbon offset projects by integrating open datasets, satellite-based remote sensing, and large language models (LLMs). Focusing on additionality and baseline integrity, the study examines existing challenges—including inflated baselines, inconsistent standards, leakage risks, and limited transparency—and proposes a system to automate early-stage project assessment. The platform combines AI-driven document analysis and geospatial data processing to evaluate risk factors such as additionality, leakage, and policy compliance, offering stakeholders an accessible, scalable tool to identify high-integrity carbon credits and mitigate greenwashing. This work aims to enhance transparency, accountability, and trust in the voluntary carbon market through data-driven, user-friendly decision support.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162071</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Environmental and Supply Chain Topics in Finance</title>
<link>https://hdl.handle.net/1721.1/162070</link>
<description>Essays in Environmental and Supply Chain Topics in Finance
Zhang, Henry H.
This thesis comprises three essays in finance. The first two essays study how liquidity provision by the financial sector affects firms’ production decisions in response to shocks. The third essay studies the real and financial impacts of regulatory enforcement in an environmental setting.&#13;
&#13;
The first chapter (joint with Victor Orestes and Thiago Christiano Silva) shows that firms experience large increases in sales and purchases after receiving cheaper liquidity. We focus on factoring, defined as the supplier-initiated sale of receivables. In Brazil, receivables funds (FIDCs) securitize receivables for institutional investors. By assembling a novel transaction-level dataset of factoring with other credit operations for all registered firms and FIDCs, we construct a shift-share instrument for factoring financing supply based on FIDC flows. We then use a novel combination of electronic payments, trade credit, and employer-employee matched data to estimate the impacts. A flow-induced increase in receivables demand reduces firms’ factoring interest rate. In response, firms demand more permanent labor and less temporary labor. In our model, these effects arise from factoring’s purpose of reducing cash inflow volatility, helping firms match inflows to outflows, which firms otherwise achieve at an efficiency cost through substitution across labor types.&#13;
&#13;
The second chapter (joint with Victor Orestes and Thiago Christiano Silva) uses transaction-level data on payments, credit, and insurance to examine how Brazilian farmers responded to the severe frost of July 2021, a shock that affected coffee, a perennial crop whose plants are a major component of farm value. The frost shock reduced both output and the pledgeable value of farmers’ collateral. We find that insured farmers increased investment in the years following the shock, while uninsured farmers reduced investment and borrowing. We show how this pattern is consistent with models of imperfect pledgeability of a firm’s collateral, where constrained firms neither insure (ex-ante) nor fully recover from a shock (ex-post). Limited commitment endogenously generates under-insurance through the combination of upfront payment of the insurance premium with the tightening of borrowing constraints post-shock due to the decrease in total collateral. We discuss two equilibrium implications of this mechanism: the inefficacy of emergency credit lines in targeting liquidity constrained firms and the amplification of output volatility from the rising risk of extreme weather shocks.&#13;
&#13;
The third chapter (joint with Ananya Kotia and Utkarsh Saxena) studies the aggregate impacts of court-ordered iron ore mining bans in India. The local sectoral ban is a command-and-control (CAC) policy that is commonly applied to natural resource settings, usually when the regulator has a signal of widespread non-compliance. The Supreme Court of India imposed bans on iron ore mining and outbound iron ore trade in two states in response to reports that mines operated under fake environmental permits and underpaid mining royalties. Using firm-level industrial survey data and mine-level output data, we decompose the bans’ effects into trade, production networks, and local labor demand channels. Our results indicate persistent declines in employment, capital stock, and borrowing by iron-consuming plants, despite the temporary duration of the ban. These findings highlight the economic spillovers caused by CAC policies, especially in industries that are upstream in the supply chain.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162070</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flexibility in Platform Operations</title>
<link>https://hdl.handle.net/1721.1/162068</link>
<description>Flexibility in Platform Operations
Zhao, Jiayu
This thesis studies how modern service platforms, through algorithmic and market design, leverage agents’ flexibility to enhance operational efficiency. The last decade has witnessed the booming growth of such platforms in ride-hailing (e.g., Uber), e-commerce (e.g., Amazon), and hospitality (e.g., OpenTable). A central operational challenge in these systems lies in the heterogeneity across both supply and demand. For example, Uber cannot match a rider and a driver who are far apart in time or location. To address such challenges, platforms increasingly rely on flexibility levers—interventions that encourage market participants to be more accommodating in when or how they interact with the platform. For instance, Uber’s "wait and save" option offers a discount to riders who are willing to wait longer, making it easier to find compatible matches. Motivated by the growing use of such flexibility incentives, this thesis examines how flexibility can be structured, coordinated, and optimized in modern platforms. It focuses on two central dimensions of flexibility: (1) how flexibility levers interact across a platform’s ecosystem and (2) how flexibility decisions can be optimized to improve operational performance.&#13;
&#13;
Part I of this thesis examines the interactions and implications of platforms' flexibility decisions. Decisions around flexibility on platforms influence both (i) horizontal dynamics across market sides and (ii) vertical dynamics in a supply chain. Chapter 2 investigates the horizontal interaction between demand-side and supply-side flexibility incentives. While such incentives are common on both the demand (e.g., "wait and save" feature at Uber) and the supply side (Ride streak bonuses at Uber) of platforms, they have been treated in isolation in the literature and in practice. Chapter 2 initiates the study of two-sided flexibility in platforms: by modeling how these incentives influence the likelihood of compatibility between agents and the resulting matching size, we study whether and when platforms should invest in flexibility across both market sides. Moreover, we identify that platforms may realize significant efficiency gains by incorporating the horizontal interplay of flexibility when designing different incentives. &#13;
&#13;
In an orthogonal direction, Chapter 3 investigates the vertical supply chain implications of ride-hailing platforms' flexibility decisions. When dual-sourcing autonomous vehicles (AVs) and flexible human drivers with self-scheduling capacity, platforms (e.g., Uber's operations in Phoenix and Austin) make dispatch prioritization decisions to fulfill demand through a hybrid fleet. These decisions affect the incentives of AV suppliers and human drivers, and the self-scheduling nature of gig workers introduces novel supply chain challenges. We study how these challenges can hinder successful AV deployments and provide contracting solutions to overcome them.&#13;
&#13;
Part II of the thesis focuses on optimizing specific operational levers for flexibility. The digitization of modern platforms allows for algorithms that provide better customization and timing to harness flexibility. For instance, booking platforms can adjust their admission control decisions in real-time by considering customers' heterogeneous probabilities of being no-shows (i.e., not requiring service) and their compensation requirements for overbooking. In Chapter 4 we analyze an online resource allocation problem that allows overbooking and propose a policy that improves the additive profit loss guarantee (compared to a clairvoyant) in T periods from an order of square-root-T in the literature to a bounded constant. &#13;
&#13;
A related application appears in e-commerce, where retailers seek to use promotional discounts to align customer demand with their inventory position. Chapter 5 investigates how platforms can leverage an "opaque selling" strategy to dynamically time these discounts to influence purchase behavior and balance inventory. We propose a class of dynamic inventory-balancing algorithms that adapt opaque selling to real-time inventory states, achieving order-optimal fulfillment costs. This chapter demonstrates how demand-side flexibility can be operationalized through pricing levers for better inventory management.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162068</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reduction of Radiation Produced in Ion Implantation Devices, and Measurement of Some Relevant Cross-Sections</title>
<link>https://hdl.handle.net/1721.1/162067</link>
<description>Reduction of Radiation Produced in Ion Implantation Devices, and Measurement of Some Relevant Cross-Sections
Zangi, Arthur S.
Ion implantation devices, machines which can very precisely dope semiconductors using beams of accelerated charged particles, have in recent years begun to be used in implanting high energy light ions, with energies greater than 1 MeV. This has caused unprecedented production of neutron and gamma radiation, particularly of neutrons from the ¹³C(α,n)¹⁶O reaction, creating an unacceptable radiation hazard. To address this issue, we undertake dose mapping and modeling efforts to create simulation tools in Geant4 which can accurately predict dose rates on the Axcelis VXE LT. &#13;
&#13;
Existing physics tools for modeling nuclear reactions have been shown to produce non-physical results at incident particle energies of 1-2 MeV, as these tools are frequently used for modeling reactions which may have energies into the GeV or even TeV range. To address these deficiencies, we construct a new drop-in physics model which uses relativistic kinematic equations to precisely predict the energy and angular distributions of secondary particles produced in Geant4 at low energies. This model relies on accurate cross-section data to describe the reaction; to address gaps in the literature on the two neutron-producing reactions of interest to this work, we measure the angle-dependent cross-section of the ¹³C(α,n)¹⁶O reaction over 7 angles, at the 2.605 and 2.670 MeV resonances, and we measure the total cross-section of the ²⁹Si(α,n)³²S reaction at 2.6 and 2.7 MeV.&#13;
&#13;
By implementing the new physics model and adding new cross-section data to the model of the ion implantation device, we are able to produce a high-fidelity simulation of radiation production and transport in ion implantation devices. Using this tool, we then propose solutions to mitigate radiation production within the ion implanter, reducing the radiation hazards of high energy ion implantation devices.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162067</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radiation Effects on Thermal Properties of Advanced Nuclear Materials</title>
<link>https://hdl.handle.net/1721.1/162066</link>
<description>Radiation Effects on Thermal Properties of Advanced Nuclear Materials
Johnston, Maren
Understanding the effects of irradiation on critical thermophysical properties is fundamental for the advancement of next-generation nuclear systems operating in high-flux neutron and gamma environments. Zirconium hydride (ZrH) and yttrium hydride (YH) have emerged as promising neutron moderating materials due to their exceptional hydrogen density leading to superior moderating power. Yet, the radiation-induced microstructural evolution and its correlation to macroscopic thermal transport phenomena remain insufficiently characterized.&#13;
&#13;
In this work, ZrH and YH specimens were characterized pre- and post-irradiation via laser flash analysis, high-resolution dilatometry, and differential scanning calorimetry. Comparative analysis revealed that even low-fluence neutron irradiation induced complex defect clusters that degraded thermal diffusivity, while the crystallographic lattice parameters, vibrational energy states (inferred from thermal expansion measurements), and heat capacity exhibited an inconclusive response to radiation damage.&#13;
&#13;
To address limitations in current characterization methods for large-scale, anisotropic composite nuclear materials, we developed an advanced thermal transport measurement facility using infrared photothermal excitation. This platform enables spatially-resolved thermal diffusivity mapping of silicon carbide (SiC) composites—materials with complex three-dimensional fiber arrangements being evaluated for accident-tolerant fuel cladding applications. Complementary Thermal Conductivity Microscopy (TCM) measurements conducted at Idaho National Laboratory provided microscale resolution of constituent thermal properties, establishing a multi-scale characterization approach that bridges microscopic thermal transport mechanisms with bulk composite performance. These findings advance the qualification of advanced nuclear materials, enabling more accurate thermomechanical modeling and performance prediction under the extreme conditions of next-generation reactors.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162066</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bragg Coherent Diffraction Imaging of Metal Microcrystals Using a Multipurpose In Situ Cell Design</title>
<link>https://hdl.handle.net/1721.1/162065</link>
<description>Bragg Coherent Diffraction Imaging of Metal Microcrystals Using a Multipurpose In Situ Cell Design
Hultquist, Riley J.
Structural materials are a key limiting factor in the safety, longevity, and efficiency of nuclear power plants. Advanced metal alloys show great promise for use in reactor environments, but ensuring their reliability requires a fundamental understanding of their microstructural evolution under extreme conditions. In situ X-ray experiments offer a powerful means to investigate nanoscale defect evolution under reactor-relevant conditions. Bragg coherent diffraction imaging (BCDI), a synchrotron X-ray technique, enables high-resolution 3D imaging of degradation processes. Combined with an experimental electrochemical cell, BCDI is a promising tool for providing insight into the problems facing advanced materials in next-generation reactor designs. In this work, a custom designed electrochemical cell, successfully adapted for use at four beamlines, was developed and used to demonstrate in situ corrosion and hydrogen embrittlement (HE) of nickel (Ni) and copper (Cu) microcrystals. HE experiments confirmed the hydrogen evolution reaction (HER) at Cu surfaces and bulk embrittlement, using a removable silver/silver chloride (Ag/AgCl) electrode to maintain a stable reference potential. The cell’s chemical durability was demonstrated during more than 30 hours of operation, wherein Ni microcrystals were subjected to boric acid (B(OH)3) and lithium hydroxide (LiOH) to simulate the corrosive coolant chemistry of pressurized water reactors (PWRs). BCDI revealed the evolution of phase and dislocations in a Ni microcrystal under these conditions, affirming its power as a nanoscale measurement tool. Furthermore, BCDI provided direct evidence of lattice expansion in Cu in response to cathodic reduction of hydrogen. Additional analysis reveals a selective beam relaxation effect on Ni microcrystals, providing further insight into radiation-material interactions. 
The findings of this work lay important groundwork for future advanced alloy development utilizing user-friendly in situ experimental cells.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162065</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Car-Free Living: Shared Micromobility and Public Transit Interactions in Chicago</title>
<link>https://hdl.handle.net/1721.1/162064</link>
<description>Enabling Car-Free Living: Shared Micromobility and Public Transit Interactions in Chicago
Joyce-Johnson, Seamus C.
Shared micromobility/bikeshare services and public transit both offer travel alternatives to the automobile in urban areas. While these services might be viewed as competitors in the urban mobility space, this thesis argues that each benefits from the other as part of a “package of options” available to the car-free or car-lite urban resident that together provide a comprehensive replacement for auto-mobility. This work centers on the Chicago mobility context. It compares shared micromobility systems in Chicago, Los Angeles, Austin, Pittsburgh, and Washington, D.C., each of which has varying levels of transit integration, ridership, ownership models, and fares. It finds that transit agency ownership of shared micromobility systems appears not to be a panacea and that truly integrated fares are not present even in agency-owned systems. It also finds that lower fares are present in systems with greater levels of public subsidy, regardless of the ownership model. The second part of the thesis characterizes the specific interactions between Divvy, Chicago’s main scooter- and bikeshare system, and the Chicago Transit Authority (CTA). It tests the suitability of novel data sources, including CCTV footage and CTA farecard transactions, for inferring transfers between the two systems and finds that existing spatiotemporal inference methods do not capture the wide heterogeneity in transfer rates among rail stations. Although Divvy has stations near most CTA rail stations, there is room for improvement in the rapidity of these transfers. Using GIS and open-source routing tools, the thesis finds an average walk time of 2.1 minutes from CTA entrances to the nearest Divvy station and suggests high-priority relocations. The third part of the thesis presents preliminary results from a survey of Chicago-area residents probing their attitudes and behaviors regarding shared micromobility and public transit. 
The survey results showed some evidence of complementary use between the two modes. The thesis concludes with a set of recommendations for the CTA regarding improvements in its integration with Divvy.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162064</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Environmental Regulation</title>
<link>https://hdl.handle.net/1721.1/162063</link>
<description>Essays on Environmental Regulation
Aspelund, Karl Milutin
I focus on three challenges that often confront regulators in designing environmental regulations around the world: equity-efficiency tradeoffs, incomplete information, and significant ecological and economic uncertainty across time and space. First, I analyze the efficiency and distributional consequences of trade restrictions in environmental permit markets. I study common trade restrictions—segmentation and production requirements—in Iceland’s fisheries permit market, showing how they increase employment and compress the income distribution at an efficiency cost. Second, in the U.S. Conservation Reserve Program, Anna Russo and I find that auction mechanisms designed to incentivize land conservation suffer from widespread non-additionality due to adverse selection in land use. A redesigned scoring system that accounts for counterfactual land outcomes improves welfare. Third, using a stylized model calibrated to the Atlantic scallop fishery, Aaron Berman and I evaluate the use of output, input, and quantity-based regulations over time when a resource exhibits ecological and economic uncertainty across large areas of space. We show that, for a given ultimate sustainability goal, output taxes maximize value but exacerbate inequality and ecological risk, while input limits can strike a balance between flexibility, equity, and robustness.&#13;
JEL Classification: L51, Q22, Q28
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162063</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Subaltern Spaces in the Ancient City: Cultural Identity, Spatial Memory, and Networks of Meaning in Roman Pompeii</title>
<link>https://hdl.handle.net/1721.1/162062</link>
<description>Subaltern Spaces in the Ancient City: Cultural Identity, Spatial Memory, and Networks of Meaning in Roman Pompeii
Dufour, Curtis
This thesis is about subaltern spaces and identities in the Roman colony of Pompeii—an ancient city notably destroyed and preserved by the eruption of Vesuvius in 79 CE; one that has been widely studied for its preservation of a Roman urban environment that was ‘frozen in time’. The excellent preservation of the site reveals a colonial material record that has long encouraged terminal narratives of Roman acculturation, so-called Romanization, which have devalued the plurality of identities and meanings found in the dispersed spaces and imageries of the ancient city. Rejecting this unilinear narrative of colonization, this thesis instead examines the networks of meaning tied to subaltern spaces, architectures, and imageries of Pompeii under Roman colonial rule. &#13;
&#13;
In doing so, this thesis adopts a middle-range approach to the study of Pompeii’s spaces—giving attention to the distinct elements of the material record while acknowledging their interrelations that form networks of meaning stretching across time, space, and culture. These networks shaped and collated the distinctive spatial and imagistic elements constructed in the city under Roman rule—creating cohesive and legible spaces that recursively engaged with the diverse population of the city. Engaging in a ‘peopling’ of the past—that is, reimagining the lived experiences of subaltern Pompeian residents within the ancient colonial city—this thesis explores how networks of meaning led to the persistence, subsidence, and emergence of subaltern identity spaces within the ancient colonial city—spaces that were erased, appropriated, and peripheralized under Roman colonial rule. &#13;
&#13;
Through a detailed analysis of the networked spaces in the city—employing methodological frameworks from urban planning, social geography, and urban ethnography—this thesis tracks the presence of the proposed networks of meaning attached to subaltern spaces within the spatial and imagistic environment of the Colonia Cornelia Veneria Pompeianorum. In doing so, this thesis finds that the plurality of identity spaces in Pompeii cannot be understood through top-down, unilinear narratives of domination and erasure; rather, they must be apprehended as dynamic social and spatial features wherein subaltern Pompeian identities persisted within the very frameworks intended to marginalize them—producing hybridized spaces, syncretized architectural forms, and alternative discourses of place defined by the networked meanings that made the city legible to the diverse individuals who inhabited it.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162062</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Parking to Parcels: The Potential for Microhubs in New York City’s Parking Garages</title>
<link>https://hdl.handle.net/1721.1/162061</link>
<description>From Parking to Parcels: The Potential for Microhubs in New York City’s Parking Garages
Fabris-Green, Sarafina
This thesis employs a site planning and policy perspective to explore how parking garages can serve as last-mile microhubs for e-commerce package deliveries in New York City. During the COVID-19 pandemic, deliveries accelerated, prompting a proliferation of “last-mile facilities,” the destination where parcels go just prior to final delivery. This surge of activity has prompted residents to raise complaints about trucks and vans driving through their neighborhoods and blocking streets or sidewalks when unloading their goods. In response, New York City government has been forced to think more proactively about the freight supply chain and its impact on the urban environment. New York and other cities have begun experimenting with the use of microhubs. Microhubs are small spaces in which packages are unloaded from vans and trucks onto smaller, more sustainable modes such as cargo bikes and handcarts. A commonly identified but understudied location for microhubs is the parking garage. London stands out as a city with this form of hub. This thesis employs three primary research methods—site observations, interviews, and case studies—to argue that parking garages could provide a solution to better utilize space in dense cities and improve quality of life for residents by reducing the negative impacts of existing last-mile warehouses and delivery vehicles, all while requiring minimal funding. This is shown through an analysis of existing microhub sites in London and how they relate to their urban surroundings. These findings are then applied to two distinct contexts and garage designs in New York City. Finally, the thesis offers site planning criteria that connect land use policy to the design of the facilities and the surrounding public realm through the concept of “planning at the interface.”
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162061</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Financing Inclusive Resilience: Beyond the Economics of Infrastructure in Accra, Ghana</title>
<link>https://hdl.handle.net/1721.1/162060</link>
<description>Financing Inclusive Resilience: Beyond the Economics of Infrastructure in Accra, Ghana
Goyal, Shubhi
Global infrastructure losses from disasters now exceed an estimated US$700–845 billion annually, disproportionately affecting cities in the Global South (CDRI, 2023). Accra, as a rapidly urbanizing coastal city, faces recurring floods, coastal erosion, and rising vulnerabilities that erode development gains and entrench existing socio-economic inequalities. Climate-related disasters alone cost the city US$118 million in annual losses (CDRI, 2023), disproportionately affecting informal settlements. Infrastructure financing remains underfunded: the city needs US$37.9 billion annually to meet infrastructure needs by 2047 (GNIP, 2018), while a US$900 million gap undermines its Climate Action Plan (AMA, 2025). &#13;
&#13;
Despite increased national investment and emerging global climate finance mechanisms, Accra struggles to attract and equitably deploy resources for inclusive resilience (CPI, 2023). Projects like the Greater Accra Resilient and Integrated Development (GARID) project expose systemic issues – prioritizing asset protection over community-centered design, with inadequate participation and social co-ownership (GARID PAD, 2019).&#13;
&#13;
This thesis critically examines how infrastructure financing mechanisms in Accra shape the potential to build inclusive resilience. Mapping the city’s financing landscape, it analyzes how institutional, financial, and governance arrangements influence the selection, distribution, and implementation of investments. Using GARID as a case study, the thesis applies a critical justice framework – drawing on distributive justice (who benefits and who bears the costs), procedural justice (who has voice and decision-making power), and epistemic justice (whose knowledge systems are valued in infrastructure planning) (Carolini, 2022) – to evaluate current infrastructure financing practices and explore opportunities to embed these justices in efforts to build resilience. Findings reveal that infrastructure financing decisions are dominated by centralized donor-driven and ministerial priorities, constrained by fiscal austerity, and evaluated through technocratic frameworks that marginalize community participation and local knowledge. &#13;
&#13;
Ultimately, the thesis argues that building inclusive resilience in climate-vulnerable cities like Accra requires transforming infrastructure financing systems to prioritize social inclusion, participatory governance, and knowledge pluralism – alongside, not subordinate to, economic efficiency.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162060</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Breaking the Loop: Climate-Driven Urbanism for America's Climate Migration Hubs</title>
<link>https://hdl.handle.net/1721.1/162059</link>
<description>Breaking the Loop: Climate-Driven Urbanism for America's Climate Migration Hubs
Wagner, Cale
As sea level rise and other climate impacts force millions across the U.S. to relocate in the coming decades, how receiving cities accommodate this growth will significantly impact future emissions trajectories. This thesis examines the climate migration feedback loop, where climate migrants relocate to urban areas with carbon-intensive development patterns, inadvertently accelerating the climate change driving their displacement.&#13;
&#13;
Through analysis of three contrasting metropolitan areas—Atlanta, Portland, and Buffalo—this research demonstrates how different development approaches could either perpetuate or disrupt this feedback loop. Using a spatial methodology based on the urban transect model, the study compares Business-as-Usual scenarios that follow current development trends with Climate-Driven Reform scenarios that redirect growth toward transit-accessible, walkable locations.&#13;
&#13;
The research reveals that Climate-Driven Urbanism can meaningfully reduce both land consumption and emissions compared to conventional development patterns. These reductions stem not from technological advancement or behavioral change, but from strategic spatial reorganization of the same migrating population, with each metropolitan area demonstrating unique implementation pathways. By connecting regional migration flows to metropolitan development scenarios and neighborhood design interventions, this thesis offers planners, designers, and communities a framework for evaluating alternative futures that transform population growth from a spatial challenge and emissions liability into a catalyst for sustainable urbanism.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162059</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reimagining the Role of City Owned Assets as Multifunctional Infrastructure: Serving Community Needs Through Collaboration</title>
<link>https://hdl.handle.net/1721.1/162058</link>
<description>Reimagining the Role of City Owned Assets as Multifunctional Infrastructure: Serving Community Needs Through Collaboration
Smith, Alessandra
This thesis investigates how city governments can reconceptualize infrastructure to reshape value creation for communities, using the City of Atlanta as a case study. By examining various departments and executive offices within Atlanta’s municipal structure, the research highlights the complexities of urban governance, where value is not uniformly defined or understood even within a single city. The central question guiding this work is: How can Atlanta’s city agencies collaborate across departments to identify opportunities to create more value through city-owned assets?&#13;
&#13;
Through stakeholder interviews and a mapping of publicly owned assets, this thesis explores an alternative, strategic approach to infrastructure, one that supports not only urban planners but also city practitioners seeking to enhance residents’ quality of life through a value-based lens. The study also acknowledges the often overlooked, expanded value of built assets, which remains difficult to capture through conventional metrics. In doing so, it argues for a broader, more inclusive understanding of infrastructure’s role in urban life.&#13;
&#13;
This research offers a framework for viewing and exploring infrastructure and values in a more comprehensive and holistic way than traditional methods allow. The framework centers strategy on prioritizing infrastructure planning and its outcomes, the spatial relationships and functions of infrastructure, and the relationships that influence how people interact with infrastructure, all through a value-based lens.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162058</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>“Whose Bronx?” Regime Politics and the Evolution of Community Power at the Kingsbridge Armory</title>
<link>https://hdl.handle.net/1721.1/162057</link>
<description>“Whose Bronx?” Regime Politics and the Evolution of Community Power at the Kingsbridge Armory
Phillips, Natalie
This thesis traces the 30-year history of redevelopment activities at the Kingsbridge Armory in the Northwest Bronx, as community groups have mounted an expanding challenge to development-as-usual in New York City. Using urban regime theory as a lens, I deploy archival research and interviews to assess the tensions that emerge when regime politics collide with a building movement of community power at the Kingsbridge Armory over time. I argue that New York City’s predominant urban economic development regime is not structured to accommodate an organization that is both a grassroots leader and a developer, and that as community power continues to evolve, the regime’s traditional arrangements become increasingly untenable. I ultimately assert that the increasingly structural movement of community power at the Kingsbridge Armory requires a reimagining of the informal processes, logics, and roles that have defined New York economic development.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162057</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>China Dispossession Watch: Making Visible the Human Costs of Forced Land Expropriation in Urbanizing China</title>
<link>https://hdl.handle.net/1721.1/162056</link>
<description>China Dispossession Watch: Making Visible the Human Costs of Forced Land Expropriation in Urbanizing China
Wu, Franny Xi
This thesis critically examines China's land expropriation regime through a mixed-methods approach that integrates ethnographic investigation, quantitative economic analysis, and practical interventions developed in collaboration with affected communities. Drawing on extensive fieldwork in the Yangtze Delta Region, including 50 in-depth interviews with dispossessed residents, the research documents how China's urbanization strategy systematically captures land value through a dispossession machinery operating at the intersection of state power, market mechanisms, and contested citizenship. The ethnographies reveal a sophisticated system of dispossession enabled by a network of actors whose complementary roles maintain procedural appearances while facilitating extralegal tactics. Quantitative analysis demonstrates systemic under-compensation and value capture that leaves dispossessed households with livelihood disruption and housing insecurity. The research examines how affected communities navigate severe constraints through adaptive resistance strategies to overcome power asymmetries and institutional manipulation, and documents their economic, social, and health outcomes. Moving beyond analysis to practice, the thesis introduces two pragmatic interventions developed through collaborative design with affected communities: a digital humanities platform hosting multimedia ethnographic archives and a quantitative data dashboard; and an anti-displacement handbook which operationalizes research findings into actionable guidance calibrated to the specific challenges identified by community partners. These practical outputs, established as the China Dispossession Watch social venture, reflect a theory of change focused on addressing information asymmetries while building horizontal knowledge networks and long-term movement capacity.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162056</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of Group Decision-Making</title>
<link>https://hdl.handle.net/1721.1/162055</link>
<description>Dynamics of Group Decision-Making
Orzach, Roi
This thesis comprises three chapters, all focused on Microeconomic Theory, specifically the dynamics of decision-making. The first explores how the desire for conformity results in long-run misperceptions due to uninformative decision-making. The second chapter studies multi-project collaborative experimentation. The final chapter analyzes whether decentralized organizations should utilize sequential or concurrent decision-making. The first chapter notes that in many settings, individuals imitate their peers’ public decisions for one or both of two reasons: to adapt to a common fundamental state, and to conform to their peers’ preferences. In this model, the fundamental state and peers’ preferences are unknown, and the players learn these random variables by observing others’ decisions. With each additional decision, the public beliefs about these unknowns become more precise. This increased precision endogenously increases the desire to conform and can result in decisions that are uninformative about a player’s preferences or perceptions of the fundamental state. When this occurs, social learning about peers’ preferences and fundamentals ceases prematurely, resulting in inefficient decisions. In line with findings from social psychology, I show that interventions aimed at correcting misperceptions of peers’ preferences may lead to more efficient decision-making in settings where interventions aimed at correcting misperceptions of the fundamental state may have no effect. The second chapter (joint with Charles Angelucci) analyzes collaborative experimentation across multiple independent domains. Each domain contains infinitely many potential projects with asymmetric benefits. In each period and domain, two players can idle, jointly explore a new project, or jointly exploit a known one, with voluntary transfers. For intermediate discount factors, treating domains as independent during experimentation is suboptimal. 
The optimal experimentation policy exhibits common features of collaborative experimentation: lengthy exploration, temporary project exploitation, recall of past projects, and inefficient initial or terminal idling within certain domains. We connect these findings to research on buyer-supplier dynamics and persistent productivity differences. The final chapter examines how the timing of decision-making shapes the allocation of decision rights within an organization. Here, I analyze concurrent versus sequential decision-making in a model where two units first communicate and then make decisions, attempting to both adapt to their local conditions and coordinate with their partner. Sequential decision-making improves overall information sharing compared to concurrent decision-making. However, first movers also have an incentive to over-adapt to their state, knowing second movers will conform to their decision. A surplus-maximizing headquarters prefers sequential decision-making to concurrent if and only if (i) the two units’ local conditions have sufficiently different volatilities and (ii) their need to coordinate is sufficiently asymmetric or low. Finally, sequential decision-making is shown to be optimal even when allowing for additional governance structures involving the reallocation of decision rights across the units and the headquarters and is shown to render some commonly-analyzed forms of decentralization sub-optimal. JEL Classification Numbers: C72, D83, D90
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162055</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Storage and capacity rights markets in the natural gas industry</title>
<link>https://hdl.handle.net/1721.1/161778</link>
<description>Storage and capacity rights markets in the natural gas industry
Paz-Galindo, Luis A.
            (Luis Andrés)
Thesis: Ph. D., Massachusetts Institute of Technology, Technology, Management, and Policy Program, 1999; Includes bibliographical references (p. 165-169).
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161778</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adsorption physics of metals partially coated by metallic films</title>
<link>https://hdl.handle.net/1721.1/161777</link>
<description>Adsorption physics of metals partially coated by metallic films
Levine, Jules David.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1963; Vita.; Includes bibliographical references (leaves 123-129).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161777</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fluorocarbon synthesis in a high-intensity carbon arc</title>
<link>https://hdl.handle.net/1721.1/161776</link>
<description>Fluorocarbon synthesis in a high-intensity carbon arc
Bronfin, Barry Robert.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1963; Includes bibliographical references (leaves 95-103).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161776</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine verification of mathematical proof</title>
<link>https://hdl.handle.net/1721.1/161775</link>
<description>Machine verification of mathematical proof
Abraham, Paul W.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Mathematics, 1963; Vita.; Includes bibliographical references (leaves 208-210).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161775</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An evaluation of the main harbors of Puerto Rico as to their potential for the location of port industries, with special reference to Jobos Harbor</title>
<link>https://hdl.handle.net/1721.1/161774</link>
<description>An evaluation of the main harbors of Puerto Rico as to their potential for the location of port industries, with special reference to Jobos Harbor
Martinez-Sandin, Owen.
Thesis: M.C.P., Massachusetts Institute of Technology, Department of City and Regional Planning, 1960; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161774</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical electron work functions of film coated metals</title>
<link>https://hdl.handle.net/1721.1/161773</link>
<description>Theoretical electron work functions of film coated metals
Levine, Jules David.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1961; Includes bibliographical references (leaves 47-48).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161773</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heat transfer from immersion heaters to boiling liquids</title>
<link>https://hdl.handle.net/1721.1/161772</link>
<description>Heat transfer from immersion heaters to boiling liquids
Simpson, H. C.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1951; Includes bibliographical references (leaves 161-163).
</description>
<pubDate>Mon, 01 Jan 1951 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161772</guid>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contract design of a fleet replenishment ship</title>
<link>https://hdl.handle.net/1721.1/161771</link>
<description>Contract design of a fleet replenishment ship
Morcillo Dosman, Alfonso M.
Thesis: B.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1955; Bibliography: leaf 97.
</description>
<pubDate>Sat, 01 Jan 1955 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161771</guid>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Jurassic park--the thesis : simulation of dinosaur social behavior using behavior networks</title>
<link>https://hdl.handle.net/1721.1/161770</link>
<description>Jurassic park--the thesis : simulation of dinosaur social behavior using behavior networks
LeCompte, David W.
            (David William)
Thesis: B.S., Massachusetts Institute of Technology, Department of Mathematics, 1993; Includes bibliographical references (leaves 38-39).
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161770</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies of an x-ray selected sample of cataclysmic variables</title>
<link>https://hdl.handle.net/1721.1/161769</link>
<description>Studies of an x-ray selected sample of cataclysmic variables
Silber, Andrew D.
            (Andrew David)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1992; Includes bibliographical references (p. 253-254).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161769</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A bid-rent analysis of housing market discrimination.</title>
<link>https://hdl.handle.net/1721.1/161768</link>
<description>A bid-rent analysis of housing market discrimination.
Galster, George Charles.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1974; Vita.; Bibliography: leaves 273-283.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161768</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Students who change majors: a study of adolescent development at MIT.</title>
<link>https://hdl.handle.net/1721.1/161767</link>
<description>Students who change majors: a study of adolescent development at MIT.
Spitzer, Charles Mark.
Thesis: B.S., Massachusetts Institute of Technology, Department of Humanities, 1968; Bibliography: leaves 77-78.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161767</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An economic analysis of the cobalt industry.</title>
<link>https://hdl.handle.net/1721.1/161766</link>
<description>An economic analysis of the cobalt industry.
Burrows, James Christian.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1970; Vita.; Bibliography: leaves 400-405.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161766</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Private interests and international conflict : a case study of US intervention in the Congo</title>
<link>https://hdl.handle.net/1721.1/161765</link>
<description>Private interests and international conflict : a case study of US intervention in the Congo
Gibbs, David Neil.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1989; Includes bibliographical references (leaves 404-427).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161765</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Yang-Mills on the two-sphere</title>
<link>https://hdl.handle.net/1721.1/161764</link>
<description>Quantum Yang-Mills on the two-sphere
Fine, Dana Stanley.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1989; Includes bibliographical references (p. 40).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161764</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The making of industrial policy : ad hoc corporatism and cable and satellite technology in West Germany</title>
<link>https://hdl.handle.net/1721.1/161763</link>
<description>The making of industrial policy : ad hoc corporatism and cable and satellite technology in West Germany
McKnight, Lee W.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1989; Vita. M.I.T. copy lacks leaves 15 and 385.; Includes bibliographical references (leaves 378-403).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161763</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three essays on the theory of contracts</title>
<link>https://hdl.handle.net/1721.1/161762</link>
<description>Three essays on the theory of contracts
Hermalin, Benjamin E.
            (Benjamin Edward)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1988; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161762</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of degradation rate and crosslink density of artificial skin on wound contraction</title>
<link>https://hdl.handle.net/1721.1/161761</link>
<description>Effects of degradation rate and crosslink density of artificial skin on wound contraction
Lee, Elaine.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1986; Bibliography: leaves 93-94.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161761</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reflectivity studies of semimetals under pressure.</title>
<link>https://hdl.handle.net/1721.1/161760</link>
<description>Reflectivity studies of semimetals under pressure.
Mendez Perez, Emilio Eugenio.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1979; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161760</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Exomoon Formation in Circumplanetary Disks Using Dustpy</title>
<link>https://hdl.handle.net/1721.1/159960</link>
<description>Modeling Exomoon Formation in Circumplanetary Disks Using Dustpy
Noto, Maurielle I.
This study presents one-dimensional simulations of a viscously evolving, gas-starved circumplanetary disk (CPD) modeled around a Jupiter-like planet. The simulations investigate the conditions under which satellites may form, with a particular focus on identifying physical mechanisms that create pressure bumps and dust traps capable of triggering the streaming instability. Multiple simulations were conducted with injected dust particles having maximum sizes of 100 µm, 1 mm, and 1 cm, and with fragmentation velocities set to 100, 500, and 1000 cm/s. Results show that regardless of the initial maximum injection particle size, the CPD consistently evolves toward the same maximum grain size, 0.5 cm, driven by system-wide physical processes such as radial drift, gas-dust coupling, and fragmentation limits. Larger fragmentation velocities enable more rapid and extended particle growth, leading to earlier quasi-steady-state evolution and allowing grains to reach sizes beyond the fragmentation barrier in certain regions. An analysis of dust and gas radial velocity profiles was performed to examine the size-dependent dynamics of particles, offering insight into the evolving coupling between dust and gas across the disk. Although the simulations did not include dust back-reaction—thereby excluding the possibility of observing streaming instability—the framework establishes a baseline for future studies. Enabling back-reaction and incorporating substructures such as radial gaps would help identify localized regions of moon formation. These simulations also pave the way for further investigation into the roles of icelines and volatile transport in setting satellite composition and structure, contributing to a broader understanding of exomoon formation and habitability.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159960</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cross-Frontal Exchange at the US Northeast Shelfbreak</title>
<link>https://hdl.handle.net/1721.1/159959</link>
<description>Cross-Frontal Exchange at the US Northeast Shelfbreak
Taenzer, Lukas L.
Exchange across the semipermeable US Northeast shelfbreak front is a potential driver of irreversible change to the continental shelf waters, its productive ecosystem, and economically valuable fisheries. However, cross-frontal exchange is difficult to observe directly because it is highly intermittent, non-linear, and driven by both internal frontal instability and external forcing. In this thesis, I quantify eddy-driven exchange across the US Northeast shelfbreak front and its impact on the coastal ocean, starting on seasonal timescales and moving toward individual synoptic events. For this task, I take advantage of unprecedented multi-year observations from the Ocean Observatories Initiative (OOI) Coastal Pioneer Array (2014-2022). On seasonal timescales, the buoyancy-driven shelfbreak front is persistently trapped at the shelfbreak, which supports theoretical predictions of shelfbreak frontogenesis (Chapter 2). However, exchange across the shelfbreak front leads to a significant increase in salinity on the continental shelf between spring and fall. A volume budget of the subsurface continental shelf 'cold pool', habitat of the valuable benthic ecosystem, quantifies the contribution of eddy-driven advection to the observed salinity increase and explains the seasonal cycle of watermass variability on the shelf (Chapter 3). However, the multi-year averaged cold pool watermass budget does not capture the intermittency of cross-shelfbreak eddy-fluxes on synoptic timescales. Thus, I demonstrate how individual mooring timeseries can be used to capture the statistical distribution of eddy-driven exchange by assessing cross-shelfbreak eddy-covariance fluxes of salt and heat (Chapter 4). Mean eddy-covariance fluxes align well with previous residual estimates of cross-shelfbreak exchange to close coastal watermass budgets, and just 10-20% of statistically anomalous events are responsible for half the multi-year mean flux. 
To characterize rapid changes in continental shelf watermass properties over short timescales, I investigate the decline of seasonal stratification due to individual weather events and identify signatures of cross-shelfbreak exchange in wind-driven destratification (Chapter 5). Altogether, this thesis extends our understanding of the characteristics, timing, and magnitude of eddy-driven exchange across the US Northeast shelfbreak front on varying timescales. This information can help to inform how large-scale, long-term trends will impact the US East Coast coastal ocean and its marine ecosystem.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159959</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-Phase Heat Transfer Effects of Mixing Vane Geometries in a Narrow Rectangular Channel</title>
<link>https://hdl.handle.net/1721.1/159958</link>
<description>Single-Phase Heat Transfer Effects of Mixing Vane Geometries in a Narrow Rectangular Channel
Pisinger, Mateo
Mixing vane geometries enhance the fuel-to-coolant heat transfer within nuclear reactors, which allows for more efficient use of power reactors. At the same time, their presence affects the critical heat flux (CHF), the upper limit to power produced by the reactor, within the reactor. Numerical simulations do not accurately reflect the changes to CHF when mixing vanes are included in nuclear fuel assemblies, suggesting that the CHF models are not resolving the boiling phenomena that occur with mixing vane geometries. This thesis aims to address this gap by designing an experiment capable of directly resolving the local single- and two-phase heat transfer processes which occur when mixing vane geometries are introduced into flow channels, building on previously developed high spatial- and temporal- resolution optical and infrared imaging techniques. A high-resolution experimental database would allow researchers to understand the boiling physics at the smallest scales, enabling the creation of more advanced numerical tools for the design and safety analysis of nuclear power reactors. Single-phase heat transfer simulations using the commercial computational fluid dynamics code STAR-CCM+ were performed to aid in the design process, and a preliminary analysis of the results was conducted to identify key single-phase heat transfer phenomena. Modifications to an existing experiment were made for the inclusion of flow obstacles analogous to mixing vane geometries into a flow boiling experiment. Obstacle geometries were 3D printed using high-temperature resistant resin, allowing the creation of complex three-dimensional geometries within the experiment. Experimental validation of the simulations is needed; however, the preliminary analysis identified single-phase heat transfer phenomena of interest for further investigation.
These include: the relationship of fluid velocity and turbulent kinetic energy to heat transfer; the effects of impinging flows on heat transfer; and heat transport within the changing geometries.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159958</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-throughput tools for decoding T cell receptor specificity</title>
<link>https://hdl.handle.net/1721.1/159957</link>
<description>High-throughput tools for decoding T cell receptor specificity
Gaglione, Stephanie A.
T cells play a central role in adaptive immunity by recognizing specific antigens through their T cell receptors (TCRs). These receptors bind to peptides presented by major histocompatibility complex (pMHC) proteins, driving immune responses in cancer, infection, and autoimmunity. Understanding how TCRs recognize antigens is crucial for developing cancer immunotherapies and identifying therapeutic targets in autoimmunity, infectious disease, and allergy. However, large-scale mapping of TCR-antigen interactions remains a challenge due to the vast diversity of both TCRs and antigens, as well as the limitations in current screening technologies in cost, throughput, and accessibility.&#13;
This work presents two advances in large-scale TCR-antigen screening. The first aim introduces a scalable and cost-effective platform for synthesizing tens of thousands of TCRs from sequence data to create synthetic TCR libraries. We integrate this approach with a high-throughput antigen discovery platform that leverages pMHC-pseudotyped viruses to identify TCR-pMHC pairs. Using this system, we screen 3,808 vitiligo patient-derived TCRs against 101 antigens, and synthesize 30,810 TCRs from patients with pancreatic ductal adenocarcinoma (PDAC). By streamlining TCR assembly and antigen screening, this pipeline has the potential to advance immunotherapy, accelerate vaccine design, and deepen our understanding of TCR recognition.&#13;
The second aim presents a new method that couples pMHC-displaying virus-like particles with yeast display, enabling efficient screening of millions of TCR variants against ~100 pMHCs at once. Yeast display is a powerful tool for studying TCR-antigen interactions but is constrained by its reliance on recombinant protein production. Our approach overcomes this limitation by replacing recombinant protein with barcoded lentiviral particles, allowing large-scale, multiplexed screening of TCR libraries. By overcoming key technical barriers, these tools significantly expand our ability to study TCR specificity and engineer new antigen-specific therapeutics.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159957</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A relative trace formula approach to the stable trace formula on the unitary group</title>
<link>https://hdl.handle.net/1721.1/159956</link>
<description>A relative trace formula approach to the stable trace formula on the unitary group
Lu, Weixiao
We develop a relative trace formula on GLₙ which can be compared to the stable trace formula on the unitary group. Locally, we prove the fundamental lemma and transfer. We also prove a character identity based on the transfer. Globally, we develop a (simple) relative trace formula and compare it to the (simple) stable trace formula on the unitary group.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159956</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The influence of topography on ice-ocean interactions in coastal Antarctica</title>
<link>https://hdl.handle.net/1721.1/159955</link>
<description>The influence of topography on ice-ocean interactions in coastal Antarctica
Gaul, Alan
Interactions between various water masses and ice shelves along the Antarctic coastline impact the global climate and sea level. This thesis focuses on how geometric features such as troughs and fast ice affect cross-shelf exchange in dense water formation regions of the Antarctic continental shelf. In Chapter 2, we use an idealized, eddy-resolving model to examine how an outflow of Dense Shelf Water (DSW) drives an inflow of warmer Circumpolar Deep Water (CDW) in a narrow, prograde trough. We find that the trough organizes mesoscale, cyclonic eddies in the dense outflow into a chain pattern. These cyclones then, as an efficient group, entrain filaments of CDW towards the coast. In Chapter 3, we use the same model to investigate buoyancy-driven cross-shelf exchange in a wide, retrograde trough. We find that the dynamics of the CDW intrusion change near the shelf-break. Here, the DSW outflow excites Topographic Vorticity Waves which interact with the DSW outflow to drive onshore intrusions of CDW. Onshore of the shelf-break, CDW intrudes further poleward due to a mean flow driven by eddy rectification. In Chapter 4, we switch to a realistic model of Prydz Bay, East Antarctica, to test the impact of local icebergs on cross-shelf exchange, dense water formation, and ice shelf basal melt rates. We find that removing the Cape Darnley Ice Barrier increases CDW intrusions and decreases dense water formation due to changes to the sea ice cover and wind-driven circulation. Conversely, removing the tabular Iceberg D-15 has little impact on heat transport and only slightly decreases dense water formation. Thus, the location of grounded icebergs greatly influences their impact on regional hydrography and ice shelf melt. In all, this thesis uses numerical models to examine the dynamics of cross-shelf exchange in coastal Antarctica. Understanding these dynamics is imperative for projecting how the Antarctic margins will impact the globe in a changing climate.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159955</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Steel Decarbonization Strategies and Supply Chain Integration</title>
<link>https://hdl.handle.net/1721.1/159954</link>
<description>Analysis of Steel Decarbonization Strategies and Supply Chain Integration
Johnson, Sydney Rose
Industrial decarbonization is a pressing challenge as the global community focuses on climate mitigation. Steel production is responsible for 7% of global emissions and faces unique challenges in reducing emissions in the ironmaking process. Models are developed to assess the emission and cost characteristics of current and emerging steel decarbonization strategies at a plant and sector level. Case studies are performed in India, the second-largest steel producer, and the United States, the fourth-largest steel producer, to highlight differences in strategies. In addition, a model of hydrogen-based steel production and a corresponding hydrogen network is created to assess supply chain needs. From this analysis, we identify key cost and logistical barriers to technology implementation and their impact on the steel industry in the respective locations. Finally, the future of the steel industry is assessed from a strategic standpoint while considering challenges to commercialization and policy mechanisms.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159954</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pygmalion and Adonis</title>
<link>https://hdl.handle.net/1721.1/159953</link>
<description>Pygmalion and Adonis
Wang, Madison
This work contains parts of a draft for a novel with the working title of Pygmalion and Adonis. It can be split into three sections. The first section is the beginning of the novel, where the main character Edvard is introduced. He is rejected from the Academy of Horizons, visits the Arbiter’s temple, and receives an unusual commission. The second section takes place a little after the first, when Edvard finishes the statue and it comes to life. The now-living statue is named Marcy. In the third and last section, Edvard and Marcy visit a bookshop where they discover a unique device.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159953</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aligning Machine Learning and Robust Decision-Making</title>
<link>https://hdl.handle.net/1721.1/159952</link>
<description>Aligning Machine Learning and Robust Decision-Making
Cristian, Rares C.
Machine learning (ML) has become increasingly ubiquitous across many applications worldwide, ranging from supply chains to personalized pricing, recommendations, and more. These predictive models are often used as tools to inform operations, with the potential to revolutionize decision-making. The key question this thesis aims to address is: How can we make ML methods aware of their downstream impact on the full decision-making process? To that end, we focus on developing methods to align AI with real-world objectives in order to build efficient, safe, and robust systems.&#13;
&#13;
This thesis is split into three chapters focusing on different aspects of this problem. In Chapter I we address the heavy computational complexity of existing methods. We present a meta-optimization machine learning framework to learn fast approximations to general convex problems. We further apply this within an end-to-end learning framework that trains ML models with an optimization-based loss function to minimize the decision cost directly. This meta-optimization approach allows us to tackle problem sizes that were intractable using previous approaches from the literature. Furthermore, this work establishes analytically that the learning approach guarantees fast convergence to nearly-optimal solutions. This chapter shows that the proposed approach consistently scales better in runtime as problem size increases, being 2 to 10 times faster on various problems while retaining nearly the same accuracy.&#13;
&#13;
In Chapter II we focus on the robustness problem: making decisions that protect against worst-case scenarios as well as against noise in the data. Traditional robust optimization methods tackle this issue by creating uncertainty sets for each observation, aiming to minimize costs in worst-case scenarios. However, these methods assume the worst-case scenario happens at every observation, which can be too pessimistic. We propose a new approach that avoids constructing uncertainty sets and links uncertainties across the entire feature space. This allows for robust decision-making without assuming worst-case scenarios at every observation. Our approach integrates robustness with a concept of learning stability, proving that algorithms with a stability property inherently produce robust solutions without explicitly solving the robust optimization problem. Finally, this chapter tests the framework on a variety of problems, such as portfolio optimization using historical stock data and inventory allocation and electricity generation using real-world data, showing significant improvements in robustness and competitive average error relative to the existing literature.&#13;
&#13;
Finally, in Chapter III we consider the endogenous setting, in which the decisions we take affect outcomes, as in pricing and assortment optimization where decisions (like price) affect demand. In the end-to-end spirit, this research introduces an approach to jointly predict and optimize in this setting, learning a prediction aligned with expected cost. We further introduce a robust optimization decision-making method that can account for uncertainty in ML models --- specifically by constructing uncertainty sets over the space of ML models and optimizing actions to protect against worst-case predictions. We prove that our method captures near-optimal decisions with high probability as a function of the data. We also introduce a new class of two-stage stochastic optimization problems that can now be addressed through the end-to-end learning framework. Here, the first stage is an information-gathering problem to decide which random variable to ``poll'' and gain information about before making a second-stage decision based on it. We present several computational experiments for pricing and inventory assortment/recommendation problems. We compare against existing methods in bandits and offline reinforcement learning, showing that our approach consistently outperforms them.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159952</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reducing the Compositional and Structural Degeneracy of Planetary Interiors</title>
<link>https://hdl.handle.net/1721.1/159951</link>
<description>Reducing the Compositional and Structural Degeneracy of Planetary Interiors
Lin, Zifan
The interior conditions of planets are highly uncertain, because two types of intrinsic degeneracies – compositional degeneracy and structural degeneracy – prevent precise characterization. In this thesis, I develop a planetary interior code package, CORGI, incorporating state-of-the-art physical properties of planet-forming materials. Using CORGI, I eliminate unmixed interior scenarios for Uranus, rule out the fossil-compressed formation hypothesis for high-density exoplanets, and establish a link between formation history and atmospheric composition for hypothetical Earth-like white dwarf (WD) exoplanets, reducing interior degeneracy for these planets. However, I also identify a novel carbon-rich interior composition for sub-Neptunes, introducing an additional degeneracy to this already ambiguous category.&#13;
&#13;
It is hotly debated whether Uranus is a distinct-layer “ice giant” with greater than 70 wt% ice or a “rock giant” with compositional gradients and roughly equal amounts of ice and rock. Gravity field measurements from spacecraft, which directly probe interior mass distribution, are expected to resolve this debate. However, I show that the degeneracy will persist even with the future Uranus Orbiter and Probe (UOP) mission, although the level of degeneracy can be reduced. My models indicate that only highly mixed interiors – either those with smooth density gradients or those with substantial light elements in the mantle and heavy elements in the atmosphere – are consistent with previous Voyager 2 measurements. Additionally, I demonstrate that the UOP can distinguish between high- and low-atmospheric-metallicity scenarios and constrain the J6 harmonic, and potentially J8, if placed in close-in polar orbits, informing the mission and orbit design of UOP.&#13;
&#13;
For exoplanets with no solar system counterparts, interior models are essential for understanding their composition, structure, formation, and evolution. I apply CORGI to a category of high-density planets that are consistent with greater than 50% core mass fraction, substantially higher than that of the Earth (33%). By combining planetary interior modeling with photoevaporation modeling, I investigate one of the hypotheses – the fossil-compressed hypothesis – for the origin of high-density planets. My models reveal that most high-density planets are highly unlikely to be fossil-compressed cores, because most or even all of the iron-silicate core is molten during the evolution, whereas the fossil-compressed hypothesis requires a solid core. Kolmogorov–Smirnov test statistics show that this result is robust for planets with both hydrogen-dominated and steam envelopes.&#13;
&#13;
Planetary interior models sometimes reveal new degeneracies rather than resolving them. By combining interior, atmospheric chemistry, and transmission spectra models, I identify a new possible interior composition for sub-Neptunes: a carbon-rich composition. I posit that sub-Neptunes formed between the “soot line” – a condensation line for refractory organic carbon – and the water snow line would have high bulk C/O ratios and a substantial carbon layer. Interior models reveal that such carbon-rich compositions are consistent with the masses and radii of sub-Neptunes, given appropriate atmospheric metallicity. Atmospheric chemistry and transmission spectra models find that the spectral features predicted for carbon-rich sub-Neptunes are compatible with observations by the Hubble Space Telescope and the James Webb Space Telescope.&#13;
&#13;
Finally, I explore the connection between post-main-sequence evolution and the atmosphere and interior conditions of hypothetical Earth-like planets orbiting WDs. I show that first-generation WD planets that have experienced significant atmospheric loss and second-generation WD planets that formed in WD debris disks under a more clement radiation environment can be distinguished by the presence of a hydrogen-dominated atmosphere. Additionally, the interior conditions of second-generation WD planets can be inferred from WD pollution observations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159951</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Daikon Summer</title>
<link>https://hdl.handle.net/1721.1/159950</link>
<description>Daikon Summer
Le, Alice Trang
In Vietnamese, there’s a phrase “ôm sầu riêng” which means “holding onto your sadness on your own.” But if you read it literally, it means “holding durian.”&#13;
&#13;
When Juliette, a heartbroken college student, visits locations from her favorite film hoping to find a movie-magical romance, she unknowingly crosses paths with Hana, the film’s jaded screenwriter, who is struggling to come up with ideas for a new script after the end of a long-term relationship. After splitting from their partners, Juliette and Hana emotionally isolate themselves from others to erase the reminder of the connections they’ve lost. Juliette brings a daikon radish (no durians were for sale) around with her for company—rolling it around in a little red wagon—hoping to meet-cute with strangers at each movie location she visits. Meanwhile, Hana trades her normal routine for destructive habits to cope with the new absence surrounding her. However, unbeknownst to Juliette and Hana, while pursuing their own solitudes, they are together in their loneliness. As time passes, marked by missed connections and the daikon radishes that Juliette must replace, and by the new situations Hana gets herself into while searching for inspiration for her next film, the pair find they may have stumbled onto an unexpected path to getting closure that brings them towards one another.&#13;
&#13;
Part romance, part drama, part comedy, Daikon Summer is a coming-of-age story about learning how to be alone. It’s a film that sits with the unfinished, unresolved, and incomplete. The setting of the film—the fictional city of Berlin, California—feels at the same time as stagnant and painfully mundane as the characters’ internal worlds and as absurdly rose-colored as their dreams. In a summer heavy with a suffocating atmosphere of longing and existential ennui, the characters, at first so scared to be alone that they can’t even let go of their sadness, learn to hold onto something else—the security of their own company and their growing conviction that they’ll run into something or someone exciting around the next corner. A story about finding one’s hope and romanticism again, Daikon Summer is both a love letter to cinema, to the tender ways stories can connect us to one another, and a meditation on the self in love.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159950</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging Simple Models and Observations of the Flexure Zone around Antarctica</title>
<link>https://hdl.handle.net/1721.1/159949</link>
<description>Bridging Simple Models and Observations of the Flexure Zone around Antarctica
Cao, Rina
This study evaluates whether the length of ice flexure zones in Antarctica can be used to infer the thickness of the ice and the effective Young’s modulus using a 1-D linear elastic beam bending model. The ice flexure zone is defined as the transition region between the grounded ice sheet and the free-floating ice shelves, where the ice flexes due to the rise and fall of ocean tides. Surface elevation data from ICESat-2 were analyzed at several sites on the Ross Ice Shelf, and flexure bounds were identified using a derivative-based detection algorithm. The logarithmic relationship between the flexure length and the ice thickness consistently deviated from the predicted 4/3 slope, and the calculated values of Young’s modulus ranged widely, often exceeding the physically plausible limits. Furthermore, the flexure limits identified by the algorithm showed inconsistencies with the idealized beam-bending model. These results indicate that the assumptions of linear elasticity, constant thickness, and simple geometry are likely violated in real flexure zones. More sophisticated modeling approaches are needed to accurately capture the mechanics of ice shelf flexure zones.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159949</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principal Slip Zones in Nature and Experiments and Their Role in the Earthquake Cycle</title>
<link>https://hdl.handle.net/1721.1/159948</link>
<description>Principal Slip Zones in Nature and Experiments and Their Role in the Earthquake Cycle
Ortega-Arroyo, Daniel
Earthquakes generally do not occur in intact rocks but rather within extremely narrow (≤10 cm) principal slip zones found along fault zones. These slip zones are relatively weaker than the surrounding wall rocks, suggesting they play an important role throughout the earthquake cycle. This thesis explores the microphysical processes occurring along principal slip zones and examines their influence on fault behavior from various perspectives and scales. Chapter 2 examines slickensides from three different fault systems, using laser profilometry to measure fault surface roughness and detailed microstructural analyses to identify the processes leading to these structures. Chapter 3 presents stick-slip experiments aimed at understanding the energy flow during earthquakes, quantifying the complete energy budget of individual events through a combination of microstructural analyses, novel magnetic field imaging, ultrasonic probing, and numerical modelling. Chapter 4 involves Differential Scanning Calorimetry (DSC) measurements on ball-milled granite powders to investigate how extreme grain size reduction affects earthquake processes. Lastly, Chapter 5 presents DSC measurements of pseudotachylites aimed at constraining the thermal history of past slip events. Results from this thesis highlight that the strain path significantly influences how energy flows during the earthquake cycle, underscoring the importance of microstructural evolution in determining bulk sample behavior.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159948</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing Quantum Magnetometry as an Emerging Detection Modality for Strategic Anti-Submarine Warfare</title>
<link>https://hdl.handle.net/1721.1/159947</link>
<description>Assessing Quantum Magnetometry as an Emerging Detection Modality for Strategic Anti-Submarine Warfare
Coy, Liam J.
Emerging technologies have been a subject of much consternation in the nuclear deterrence communities as people fear they might erode secure second-strike capabilities. One such emerging technology is ‘quantum magnetometry’—magnetic field sensors which use quantum principles to obtain more precise measurements. Nuclear submarines can be detected by the distortions they cause in Earth’s background magnetic field, and quantum magnetometers could enable more precise measurements of such distortions. However, the lack of certainty around the potential of this emerging technology has led to a lack of clarity in policy circles. This thesis explores some limits on the impact of quantum magnetometry in the context of strategic anti-submarine warfare (ASW). It does this in two parts. First, it provides a survey of quantum magnetometry technologies and developments. Second, it characterizes the magnetic anomaly associated with a nuclear submarine (according to the best unclassified estimates of key parameters). It finds that while quantum magnetometers may indeed result in more sensitive magnetic anomaly detectors, their impact on strategic ASW will be limited. First, the magnetic anomaly associated with a submarine scales as the inverse cube of the distance from the submarine. Thus, a ten-fold decrease in the minimum field necessary to detect a submarine would provide only a slightly more than two-fold increase in detection range. Improvements in detection range would have to be quite significant to have any strategic impact, given the vast areas of ocean that must be searched to find submarines. Second, magnetometers are limited by more factors than sensitivity alone: there are signal processing challenges in determining whether a change in the measured magnetic field is the result of a target or some form of environmental noise.
As such, while quantum magnetometry may indeed improve submarine detection capabilities, it is unlikely to do so in a manner that meaningfully destabilizes nuclear deterrence.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159947</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating the use of surface source banking to accelerate Monte Carlo transport simulations of far-field particle fluxes</title>
<link>https://hdl.handle.net/1721.1/159946</link>
<description>Evaluating the use of surface source banking to accelerate Monte Carlo transport simulations of far-field particle fluxes
Mowery, Eleni T.
In order to enhance the verifiability and usability of surface source banking as a far-field flux and dose simulation acceleration method for Monte Carlo neutron transport codes, two surface source stationarity criteria were developed and evaluated. Surface sources were considered well-defined, and thus an accurate proxy for fission sources in eigenvalue simulations, once enough particles had been banked that these criteria were met. One criterion utilizes multi-dimensional Shannon entropy to indicate the stationarity of the surface source in physical space and energy. The other criterion uses functional expansions to track the stationarity of Legendre coefficients associated with spatially-dependent banked effective neutron reaction rates with different Z materials. The completion of a test case with an OpenMC model of an MK2 TRIGA facility indicated agreement between the two criteria. Effects of oversampling a surface source that met the stationarity criteria, as well as potential limitations of the surface source banking method itself, were also examined via the test case.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159946</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Novel Machine Learning Approach to Robust Optimization: Theory and Applications</title>
<link>https://hdl.handle.net/1721.1/159945</link>
<description>A Novel Machine Learning Approach to Robust Optimization: Theory and Applications
Boucher, Benjamin
The increasing availability of data offers modelers unprecedented opportunities to improve decision-making. In particular, we can leverage machine learning-based approaches to estimate parameters of optimization models, enabling more informed decisions. However, these models often inherit uncertainty from the data they are trained on, leading to unreliable decisions when deployed at face value. This body of work develops robust optimization frameworks that account for these uncertainties, bridging theory and practice to mitigate decision-making risk. This thesis is organized into three chapters.&#13;
&#13;
In Chapter 2, we present a robust scheduling approach tailored to hospital operations, where post-surgery recovery times are uncertain and right-skewed. Our method captures the underlying distribution of patients' length of stay by taking into account their surgery type, and without necessitating detailed patient-level features. Applied to the Bone and Joint Institute of Hartford Hospital’s elective surgery scheduling problem, our approach reduces the monthly peak census---freeing up valuable hospital beds and improving system flexibility in the face of emergencies.&#13;
&#13;
In Chapter 3, we introduce a general methodology for constructing uncertainty sets informed by the loss functions of machine learning models. These sets are designed to protect against prediction errors in estimated optimization parameters. Extending guarantees from the robust optimization literature, we derive strong guarantees on the probability of violation. Synthetic computational experiments show that our method requires uncertainty sets with radii up to one order of magnitude smaller than those of other approaches.&#13;
&#13;
Lastly, in Chapter 4, we apply robust optimization to the domain of recommendation systems, where user and item interaction data are often noisy or adversarially perturbed. We can improve model robustness by modifying the training loss to defend against worst-case inaccuracies in user preference data. Because our approach adds only a single trainable parameter to the optimization model, its runtime impact is negligible. To evaluate the effectiveness of our method, we apply our modified loss function to a suite of recommendation systems from the literature and show consistent improvements in the performance of these methods on synthetic and benchmark datasets, as well as diminished ranking sensitivity.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159945</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continual Learning for Engineering: Benchmarking and Exploring Strategies for 3D Engineering Problems</title>
<link>https://hdl.handle.net/1721.1/159944</link>
<description>Continual Learning for Engineering: Benchmarking and Exploring Strategies for 3D Engineering Problems
Samuel, Kaira M.
Engineering applications of machine learning often involve high-dimensional, computationally intensive simulations paired with limited and evolving datasets. As new designs and constraints emerge, models must adapt to incoming data without frequent retraining, which is often infeasible due to the cost of generating engineering data. Continual learning (CL) offers a promising alternative by enabling models to incrementally learn from sequential data while mitigating catastrophic forgetting, the loss of performance on previously seen examples. This thesis investigates the application of continual learning to regression-based engineering tasks, with an emphasis on surrogate modeling. We begin by benchmarking several foundational CL strategies, including regularization-based and rehearsal-based methods, across five diverse engineering datasets. To support this analysis, we construct nine new regression-focused continual learning benchmarks designed to reflect practical engineering scenarios. Results show that Experience Replay, a simple rehearsal method, consistently achieves strong performance, approaching the "joint training" baseline of retraining from scratch, while substantially reducing computational cost. To further explore how rehearsal strategies can be made more efficient and effective, we propose two adaptive replay methods that prioritize memory samples based on forgetting dynamics. These methods extend previous adaptive replay strategies by using input clustering and representations from TabPFN, a foundation model for tabular data, to guide more informed sample selection without knowledge of experience boundaries. We evaluate their performance on both complex engineering datasets and controlled synthetic tasks. In scenarios where forgetting is unevenly distributed, the adaptive methods offer clear advantages, highlighting the potential for more intelligent replay under constrained resources.
This work positions continual learning as a practical and effective strategy for handling dynamic engineering datasets, and offers new insights into how adaptive replay can enhance efficiency in data-limited, high-cost learning environments.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159944</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Order and Wavelet-Adaptive Immersed Methods for PDEs on Complex Domain Geometries</title>
<link>https://hdl.handle.net/1721.1/159943</link>
<description>High-Order and Wavelet-Adaptive Immersed Methods for PDEs on Complex Domain Geometries
Shen, Changxiao Nigel
The development of immersed methods brings a promising solution to the numerical simulation of interface-coupled multi-physics problems, such as multi-phase flows and fluid-structure interactions. This necessitates the design of novel high-order and efficient solvers based on immersed methods. This thesis examines two pivotal aspects of these methods: first, the acceleration of computational processes via adaptive resolution strategies; and second, the enhancement of accuracy order while sustaining numerical stability. To achieve the former, we develop a novel wavelet transform algorithm applicable to computational domains with arbitrary geometries. This wavelet transform maintains the order of the wavelet and serves as an indicator for local truncation error (LTE), resulting in an adaptive resolution strategy with explicit error control. To address the latter, we introduce a fifth-order upwind finite difference (FD) scheme that sustains numerical stability across any immersed interface discretization.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159943</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constrained and High-dimensional Bayesian Optimization with Transformers</title>
<link>https://hdl.handle.net/1721.1/159942</link>
<description>Constrained and High-dimensional Bayesian Optimization with Transformers
Yu, Rosen Ting-Ying
This thesis advances Bayesian Optimization (BO) methodology through two novel algorithms that address critical limitations in handling constraints and high-dimensional spaces. First, we introduce a constraint-handling framework leveraging Prior-data Fitted Networks (PFNs), a foundation transformer model that evaluates objectives and constraints simultaneously in a single forward pass through in-context learning. This approach demonstrates an order-of-magnitude speedup while maintaining or improving solution quality across 15 test problems spanning synthetic, structural, and engineering design challenges. Second, we propose Gradient-Informed Bayesian Optimization using Tabular Foundation Models (GIT-BO), which utilizes pre-trained tabular foundation models as surrogates for high-dimensional optimization (exceeding 100 dimensions). By exploiting internal gradient computations to identify sensitive optimization directions, GIT-BO creates continuously re-estimated active subspaces without model retraining. Empirical evaluation across 23 benchmarks demonstrates GIT-BO’s superior performance compared to state-of-the-art Gaussian Process-based methods, particularly as dimensionality increases to 500 dimensions. Together, these approaches establish foundation models as powerful alternatives to Gaussian Process methods for constrained and high-dimensional Bayesian optimization challenges.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159942</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transition Metal Heterogeneous Catalysis Towards Applications in Sustainable Energy: Leveraging Rational Design Principles for Activity, Stability, and Stereoselectivity</title>
<link>https://hdl.handle.net/1721.1/159941</link>
<description>Transition Metal Heterogeneous Catalysis Towards Applications in Sustainable Energy: Leveraging Rational Design Principles for Activity, Stability, and Stereoselectivity
McCormack, Kaylee Lynn
As global demand grows for renewable energy storage and conversion technologies, novel methods of storing energy and providing portable power are a necessity to accommodate variability in energy resources. Heterogeneous catalysis is a fundamental driver in the development of direct liquid fuel cells, water electrolyzers, and other sustainable energy storage applications including liquid organic hydrogen carriers (LOHCs). &#13;
The methanol oxidation reaction (MOR) is a multistep reaction comprising methanol dehydrogenation, which leaves CO adsorbed on the catalyst surface, followed by CO oxidation. Incorporating an oxophilic material that facilitates the formation of OH groups on the surface is highly effective for improving CO oxidation and MOR performance. Thus, in addition to the enhanced MOR activity from incorporating a carbide core beneath the Pt monolayers, the performance of these catalysts is expected to increase further by adding Ru atoms to the Pt shell, resulting in an overall 10-fold enhancement in mass activity compared to commercial DMFC catalysts.&#13;
&#13;
Metal hydroxide organic frameworks (MHOFs) comprise layers of edge-sharing metal hydroxide octahedra interconnected by carboxylate linkers, which utilize pi-pi stacking to impart additional stability for electrochemical applications including the oxygen evolution reaction (OER). However, we discovered that there are definitive limits to this stability. This work explored the underlying processes causing loss of MHOF-specific motifs, which lead to phase transformations from MHOF to the Ni oxyhydroxide-like phase during OER, providing insight into the phase stability of these types of materials in base. During extended electrochemical OER cycling, linkers leach from the MHOF structure, exposing more electrochemically active Ni sites and thereby increasing the geometric OER activity. The linker leaching was observed to be accelerated by Ni²⁺ to Ni³⁺/⁴⁺ oxidation, which leads to a phase transformation from MHOF to a NiOOH₂₋ₓ structure. A phase transformation mechanism is proposed in which mono-μ-oxo bridge motifs found only in the MHOF structure convert to di-μ-oxo bridge motifs in the Ni oxyhydroxide-like phase. MHOFs with the weaker pi-pi interaction L1 linker underwent full transformation to this Ni oxyhydroxide-like phase. Meanwhile, MHOFs with the stronger pi-pi interaction L4 linker transformed to Ni oxyhydroxide-like phases only in near-surface regions, where the MHOF can remain as a less active core. These results identify NiOOH₂₋ₓ as the OER-active phase while highlighting the potential stability of these MHOF materials for alkaline water oxidation. &#13;
&#13;
Finally, MHOFs present unique opportunities as sacrificial templates for thermocatalysis, offering adjustable metal centers, structural robustness, and heteroatom incorporation through linker selection. In this thesis I present a model for using MHOFs and analogous MOFs to generate catalysts with unique catalytic properties that differentiate them from typical Ni hydrogenation catalysts. The MHOF-based catalysts perform similarly to other Ni-based catalysts in naphthalene and tetralin to decalin conversion rates per active site, albeit with a notable stereoselectivity toward cis-decalin compared to the other Ni catalysts. This work highlights Ni-MHOFs as precursors for transition metal catalysts that emulate the stereoselectivity of NM catalysts, thereby reducing energy requirements in LOHC dehydrogenation.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159941</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Oppenheimer-Snyder Collapse in the BSSN Formalism</title>
<link>https://hdl.handle.net/1721.1/159940</link>
<description>Oppenheimer-Snyder Collapse in the BSSN Formalism
Leonard, Aidan J.
In general relativity, problems with high degrees of symmetry often serve as illustrative simplifications of complicated scenarios. Oppenheimer-Snyder collapse, an exact solution for the gravitational collapse of a uniform, pressureless ball of dust into a black hole, provides valuable insight into the collapse of realistic mass distributions such as stars. Early numerical relativity simulations demonstrated that a rotating ball of dust collapses into a Kerr black hole. In this thesis, we formulate the collapse of a slowly rotating dust ball using the BSSN framework from numerical relativity, with the aim of reproducing this result in a simple manner. By perturbing the Oppenheimer-Snyder solution in isotropic coordinates, we find semi-analytic solutions to the constraint equations at linear order in angular momentum. In addition, we develop a Mathematica simulation code for modeling spherical vacuum systems using the BSSN formalism. Diagnostics allow comparison of our results with theoretical predictions for the simplified case of a stationary black hole. Further work is required to introduce matter terms and to move from spherical to axial symmetry.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159940</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Regulation of Metabolic Flux Using Orthogonal Quorum&#13;
Sensing</title>
<link>https://hdl.handle.net/1721.1/159939</link>
<description>Dynamic Regulation of Metabolic Flux Using Orthogonal Quorum&#13;
Sensing
Ream, Michael James
Dynamic regulation allows engineers to direct metabolic flux and cellular resources towards target pathways, improving production of value-added chemicals. One dynamic regulation strategy is quorum sensing (QS), a form of cell-to-cell communication that allows populations of cells to function as a collective. By applying QS to engineered pathways, the diversion of metabolic resources can be coupled to the population density of the culture, thereby ensuring that sufficient growth is achieved. These circuits can then be layered to allow for fine-tuned control of the cell.&#13;
&#13;
Previous research has focused on QS systems that utilize acyl homoserine lactones (AHLs) as signaling molecules. These systems are well characterized, but pairing them in layered systems is difficult because similarities in their signals can cause unintended switching of the opposing control system. Here, we identified orthogonal AHL systems for an independently controlled, multi-layered regulation circuit, which was then applied to increase production of the valuable natural products naringenin and bisnoryangonin in Escherichia coli. To our knowledge, the resulting regulation led to the highest extracellular titers at the flask scale, with a final naringenin titer of 1251.2 +/- 59.6 mg/L and a bisnoryangonin titer of 597.7 +/- 18.3 mg/L in naringenin equivalence.&#13;
&#13;
In a parallel effort to obtain orthogonal QS-based regulations, we focused on expanding the available QS systems for the model organism E. coli. Specifically, the Gram-positive QS systems of Agr from Staphylococcus aureus and Com from Bacillus subtilis were implemented and subsequently improved for functionality in E. coli. These systems have tight control of expression, which was demonstrated by dynamic downregulation of the aromatic amino acid pathways via CRISPRi. The efficacy of these systems in synthetic biology was further illustrated by using T7 RNA polymerase to amplify the expression output of an Agr-controlled circuit.&#13;
&#13;
Overall, this work developed and applied QS-based regulation systems to improve microbial production of value-added chemicals in E. coli.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159939</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Single-Chain Polymer Nanoparticles to Mimic Globular Proteins</title>
<link>https://hdl.handle.net/1721.1/159938</link>
<description>Design of Single-Chain Polymer Nanoparticles to Mimic Globular Proteins
Jin, Tianyi
While globular proteins exhibit an impressive range of precise functionalities, their sensitivity to environmental changes has motivated scientists to pursue two complementary strategies: (1) engineering and designing proteins directly or indirectly, and (2) exploring synthetic alternatives with higher stability. Single-chain polymer nanoparticles (SCNPs) based on random heteropolymers (RHPs) have emerged as a promising platform as both protein stabilizers and mimetics. However, theoretical understanding of the origins underlying their functional versatility has lagged behind experimental advances. Unlike natural proteins, which rely on well-defined sequences and three-dimensional structures, RHPs achieve their functions through sequence and structure ensembles. In this thesis, I use multiscale molecular simulation techniques to uncover the molecular origins of the versatile, protein-mimetic functions of RHPs. This work is motivated by recent experimental findings showing that four-component methacrylate-based (MMA-based) RHPs can function as catalysts, proton channels, and chaperonins. By comparing the behaviors of MMA-based RHPs with those of globular proteins, I provide fundamental physicochemical insights and design principles for SCNPs as protein mimetics and stabilizers. I highlight the significance of chemical polarity and nuances in materials design. In Part I, I study the self-assembly and dynamics of MMA-based RHPs in both melt and solution. I show that MMA-based RHPs collapse into compact globular structures with dynamical heterogeneity and slow dynamics due to a glassy backbone. Properties including compactness, monomer hydration, and the potential to stabilize membrane proteins are largely insensitive to sequence but strongly dependent on composition. At the core of their behavior lies a phenomenon known as hydration frustration, where polar groups become dehydrated and hydrophobic groups remain hydrated. This is a key feature observed in globular proteins. 
This effect arises from a negative Flory–Huggins interaction parameter (&#120594;) between methyl methacrylate and polyethylene glycol in MMA-based RHPs. Guided by these insights, I design a biodegradable, polyester-based RHP that exhibits similar properties in silico. I further map the potential energy landscape of these RHPs through microsecond simulations. In Part II, I study the adsorption and stabilization behaviors of MMA-based RHPs on both synthetic and biological surfaces. I show that adsorption onto graphene and non-specific binding to &#120573;-barrel membrane proteins occur primarily through side-chain interactions, with limited backbone reconfiguration. The transition from a globular to a wrapped morphology is hindered by internal friction arising from the deformation of the glassy backbone. I demonstrate that population-based stabilization of &#120573;-barrel proteins is mediated through loop-specific contacts that reduce fluctuations in flexible regions. The findings of this thesis provide a comprehensive framework for understanding and designing synthetic protein mimetics and stabilizers. MMA-based RHPs present a promising alternative to natural proteins, offering greater resilience, improved cost-effectiveness, and enhanced scalability. The structural and functional parallels between RHPs and globular proteins suggest that the principles uncovered here may generalize across a broad class of biomimetic and bio-synthetic hybrid systems. This work lays the foundation for the rational design of SCNPs for emerging applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159938</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Fourier-Bessel Series and Hard Edge Limits</title>
<link>https://hdl.handle.net/1721.1/159937</link>
<description>The Fourier-Bessel Series and Hard Edge Limits
Lerner-Brecher, Matthew
The universality classes defined by the Airy and Bessel kernels are two of the most fundamental in random matrices and growth models more generally. Broadly speaking, one often encounters the Airy kernel when studying models where the relevant eigenvalues or particles are unbounded, and the Bessel kernel when examining their constrained counterparts. In this thesis, we analyze two recent problems where the relevant expressions involve a variant of the Airy functions known as the Fourier-Airy series. In both cases, we find that the constrained versions have natural analogues expressible in terms of the Fourier-Bessel series, echoing the relationship between the Airy and Bessel kernels. In the first part, we study the hard edge limit of a multilevel extension of the Laguerre β-ensemble at zero temperature. In particular, we show that asymptotically the ensemble is given by Gaussians with covariance matrix expressible in terms of the Fourier-Bessel series. These Gaussians also have an explicit representation as the partition functions of additive polymers arising from a random walk on roots of the Bessel functions. Our approach builds on techniques introduced by Gorin and Kleptsyn [1] and is rooted in using the theory of dual and associated polynomials to diagonalize transition matrices relating levels of the ensemble. Like the corresponding soft edge limit in the Hermite case studied by Gorin and Kleptsyn, the object we introduce should represent a new universality class for zero temperature random matrices. In the second part, we introduce a new diffusion process which arises as the n → ∞ limit of a Bessel process of dimension d ≥ 2 conditioned upon remaining bounded below one until time n. In addition to being interesting in its own right, we argue that the resulting diffusion process is a natural hard edge counterpart to the Ferrari-Spohn diffusion of [2].
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159937</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative embeddings with applications</title>
<link>https://hdl.handle.net/1721.1/159936</link>
<description>Quantitative embeddings with applications
Portnoy, Elia
In this thesis, we discuss quantitative embeddings that generalize a theorem of Kolmogorov and Barzdin. The theorem says that any bounded degree graph with V vertices can be mapped into a 3-dimensional ball of radius sqrt(V), so that at most a constant number of edges intersect any unit ball. In one generalization we describe how much freedom we have in placing the vertices of the graph, and in the other we prove a similar result for simplicial complexes of any dimension. We also discuss applications of these quantitative embeddings to a problem in metric geometry related to the isoperimetric inequality and a problem about constructing local quantum error-correcting codes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159936</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-rank 1 Arithmetic Siegel--Weil</title>
<link>https://hdl.handle.net/1721.1/159935</link>
<description>Co-rank 1 Arithmetic Siegel--Weil
Chen, Ryan C.
We prove the arithmetic Siegel–Weil formula in co-rank 1, for Kudla–Rapoport special cycles on exotic smooth integral models of unitary Shimura varieties of arbitrarily large even arithmetic dimension. We also propose a construction for arithmetic special cycle classes associated to possibly singular matrices of arbitrary co-rank. Our arithmetic Siegel–Weil formula implies that degrees of Kudla–Rapoport arithmetic special 1-cycles are encoded in near-central first derivatives of unitary Eisenstein series Fourier coefficients. The key input is a new limiting method at all places. On the analytic side, the limit relates local Whittaker functions on different groups. On the geometric side at nonsplit non-Archimedean places, the limit relates degrees of 0-cycles on Rapoport–Zink spaces and local contributions to heights of 1-cycles in mixed characteristic.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159935</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carbon flow and food web structure in the mesopelagic zone of the North Atlantic Ocean</title>
<link>https://hdl.handle.net/1721.1/159934</link>
<description>Carbon flow and food web structure in the mesopelagic zone of the North Atlantic Ocean
Gardner, Kayla Grace
Mesopelagic ecosystems are vital habitats that link the euphotic zone and the deep ocean through food web interactions and carbon flow pathways. In this dissertation, I use compound-specific stable carbon isotope analysis of amino acids (CSIA-AA) and DNA gut metabarcoding methodologies to provide a broad ecological outlook on mesopelagic carbon flow coupled with finer-scale taxonomic details. In Chapter 2, I analyze the diets of seven abundant mesopelagic fish species by combining the integrative power of CSIA-AA with the instantaneous, taxonomic aspects of DNA gut metabarcoding. Three primary diet types were identified: copepod-based, fish-based, and generalist. Additionally, carbon sources were variable across the two years, but cyanobacteria were consistently an important carbon source, evidence that mesopelagic fish are essential exporters in weaker biological pump systems. Finally, this chapter includes cyanobacteria CSIA-AA signature data that were previously missing from the literature. In Chapter 3, I augment the CSIA-AA data by adding genus-level zooplankton data and samples from the winter season. Zooplankton were more dispersed among all the end members than fish, particularly in the winter. Fish, however, still relied the most on cyanobacteria-sourced carbon. This chapter supplies the first zooplankton carbon CSIA-AA data set at such a fine taxonomic resolution. In Chapter 4, I examine the effect of phytoplankton community structure on fish and zooplankton carbon sources by sampling before and during a diatom bloom. Zooplankton, and to a lesser extent fish, showed a shift to diatom-based carbon sources during the bloom. As a whole, this dissertation advances our knowledge of mesopelagic food webs by providing a baseline carbon CSIA-AA data set for key zooplankton and fish species across several seasons that will inform ecological models to understand how the mesopelagic zone will react to anthropogenic pressure.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159934</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometrically-informed methods of wave-based imaging</title>
<link>https://hdl.handle.net/1721.1/159933</link>
<description>Geometrically-informed methods of wave-based imaging
Greer, Sarah Yvonne
In this thesis, we are interested in understanding and advancing wave-based imaging techniques defined by the adjoint-state method. Wave-based imaging uses wavefield data from receivers on the boundary of a domain to produce an image of the underlying structure in the domain of interest. These images are defined by the imaging condition, derived from the first-order adjoint-state method, which corresponds to the gradient and maps recorded data to their reflection points in the domain. In the first part, we introduce a nonlinear modification to the standard imaging condition that can produce images with resolutions greater than those ordinarily expected using the standard imaging condition. We show that the phase of the integrand of the imaging condition, in the Fourier domain, has a special significance in some settings that can be exploited to derive a super-resolved modification of the imaging condition. Whereas standard imaging techniques can resolve features at a length scale of λ, our technique allows for a resolution level R ≪ λ, where the super-resolution factor (SRF) is typically λ/R. We show that, in the presence of noise, R ∼ σ. In the second part, we investigate the Hessian operator, which arises from the second-order adjoint-state method, in the context of full-waveform inversion, a nonlinear least-squares problem for estimating material properties within the domain of interest. We analyze the contributions of reflected and transmitted waves to the linearized Hessian operator, demonstrating that reflected waves generally produce a high-rank component with well-distributed eigenvalues, while transmitted waves contribute a low-rank component with poorly distributed eigenvalues. This decomposition of the Hessian, motivated by the underlying physical system, provides insights that can be used to improve inversion strategies. 
The advancements in both parts of this thesis leverage the underlying structure and geometry of the domain of interest, providing the foundation for the zero-phase imaging condition in the first part and informing the decomposition of the Hessian operator in the second part.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159933</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometry and analysis of Ricci curvature and mean&#13;
curvature flows</title>
<link>https://hdl.handle.net/1721.1/159932</link>
<description>Geometry and analysis of Ricci curvature and mean&#13;
curvature flows
Zhao, Xinrui
In this thesis, we study the geometry and analysis of spaces with Ricci curvature bounded below from three perspectives, and the asymptotically conical singularities of mean curvature flows from two perspectives. For spaces with Ricci curvature bounded below, we first study the unique continuation problem on RCD spaces, a long-standing open problem with little known even in the setting of Alexandrov spaces. Together with Qin Deng, we proved that on RCD(K,2) spaces both harmonic functions and caloric functions satisfy weak unique continuation properties. Furthermore, we constructed counterexamples showing that strong unique continuation in general fails for harmonic and caloric functions on RCD(K,N) spaces where N is greater than or equal to 4. Secondly, we consider constructing a canonical diffeomorphism between the n-sphere and an n-dimensional space with Ricci curvature bounded from below by n-1 which is close to the n-sphere in the Gromov-Hausdorff sense. Together with Bing Wang, we proved that the first (n+1) eigenfunctions of the Laplacian provide a bi-Hölder diffeomorphism, and we further give a counterexample showing that the bi-Hölder estimate is sharp and cannot be improved to a bi-Lipschitz estimate. Thirdly, we study the Margulis Lemma on RCD spaces. Together with Qin Deng, Jaime Santos-Rodríguez and Sergio Zamora, we extend the Margulis Lemma for manifolds with lower Ricci curvature bounds to the RCD setting. As one of our main tools, we obtain improved regularity estimates for Regular Lagrangian flows on these spaces. For the asymptotically conical singularities of mean curvature flows, first, together with Tang-Kai Lee, we proved that asymptotically conical self-shrinkers arising as tangent flows of MCFs are unique, generalizing the result in the hypersurface case proven by Chodosh-Schulze. 
Secondly, together with Tang-Kai Lee, we prove that given any asymptotically conical shrinker, there exists an embedded closed hypersurface such that the mean curvature flow starting from it develops a type I singularity at time 1 at the origin, modeled on the given shrinker.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159932</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Movement and Trophic Ecology of Large Pelagic Fishes Connecting Surface Waters with the Ocean's Twilight Zone</title>
<link>https://hdl.handle.net/1721.1/159931</link>
<description>Movement and Trophic Ecology of Large Pelagic Fishes Connecting Surface Waters with the Ocean's Twilight Zone
Willis, Ciara Sinead Roche
The ocean’s twilight zone is a vast area of the global ocean that lies between the sunlit surface waters and perpetually dark midnight zones, covering depths from ~200 to 1000 meters. Recent work in the twilight (or mesopelagic) zone has revealed unexpected biomass and diversity that may not only challenge scientific understanding of ocean systems but also provide new and largely untapped resources for fisheries harvest. The extent to which commercially valuable, highly migratory top predators such as tuna and swordfish rely on mesopelagic biomass for forage has not previously been quantified but is thought to be substantial. Pressure from emerging industrial fisheries in the twilight zone makes determining the linkages between mesopelagic prey and migratory predators of pressing concern for sound management in keeping with the precautionary principle. Ocean predators are further hypothesized to dive into the deep ocean for a range of motives beyond forage, including for navigation on their long migrations. In this thesis, I begin by using compound-specific stable isotope analysis to trace the flow of carbon through pelagic ecosystems in the northwest Atlantic to three predators: bigeye tuna (Thunnus obesus), swordfish (Xiphias gladius), and yellowfin tuna (Thunnus albacares). I confirm the presumed high reliance of these predators on mesopelagic prey using a Bayesian mixing model approach that estimated 50-60% of their temperate carbon is sourced from mesopelagic food webs. Next, I take a larger view of epi- and mesopelagic food webs by sampling simultaneously across a pelagic food web from bottom to top at one point in time and space in the northwest Atlantic Ocean. I trace the movement of carbon and nitrogen from particulate organic matter, through mid-level consumers, up to top predators using compound-specific stable isotope analysis of amino acids. 
Nitrogen stable isotope analysis is also used to calculate trophic positions, providing a more detailed view of pelagic food web structure and function. To complement these trophic studies, I conduct a movement analysis of vertical habitat use by swordfish, focused on their intermittent extreme dives. I explore possible motivations for these dives, including forage, predator avoidance, and navigation. Qualitative investigation of dive geometry, as well as quantitative logistic models of the physical and biological environment, indicates that navigation is the most likely motive. Finally, I consider the implications of predator reliance on mesopelagic forage in a fisheries economics context. Using my earlier diet sourcing results, I adapt a bioeconomic model with a new predator-prey dynamic to evaluate the effects of potential mesopelagic fisheries on their predators, with bigeye tuna as the representative predator. Model results highlight the importance of recognizing predator-prey interactions in the management of mesopelagic fisheries and demonstrate the sensitivity of equilibrium economic and ecological conditions for the tuna stock under different price and cost scenarios. Overall, these studies emphasize the importance of the deep ocean to marine predators and suggest that a new mesopelagic fishery could be economically viable in and of itself but may have significant negative impacts on existing tuna and swordfish fisheries due to reduced forage.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159931</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Theory to Practice: Improving Causal Conclusions from Healthcare Data</title>
<link>https://hdl.handle.net/1721.1/159930</link>
<description>From Theory to Practice: Improving Causal Conclusions from Healthcare Data
Cobzaru, Raluca-Ioana
Causal inference in biomedical, epidemiological, and health policy research often relies on observational data, such as electronic health records (EHRs), patient registries, or insurance claims, to compensate for the inaccessibility of randomized controlled trials. However, causal inference from observational data depends on strong, often unverifiable assumptions, including exchangeability, parallel trends, and correct model specification. Violations of these assumptions can bias treatment effect estimates, making it essential to assess the sensitivity of causal conclusions, particularly in healthcare applications, where properly interpreting causal relationships and delivering reliable insights is critical for guiding clinical practice and informing system-wide decisions. This thesis contributes to the theoretical and empirical analysis of causal methods under realistic data limitations, with a focus on covariate selection and adjustment, proximal inference for unobserved confounding, and applications of modern estimation techniques to healthcare-relevant settings.&#13;
In Chapter 2, we investigate the performance and robustness of state-of-the-art machine learning estimators for causal inference when covariate selection for statistical adjustment is performed in a realistically suboptimal manner. Although nonparametric doubly robust methods are asymptotically unbiased, they can perform poorly in finite samples due to slow convergence of nuisance function estimates. Through an extensive simulation study, built upon previous research on statin use and atherosclerotic cardiovascular disease (ASCVD) incidence, we examine how including extraneous covariates (a likely risk when researchers over-adjust to mitigate concerns about unmeasured confounding) may degrade estimator performance. These findings highlight the importance of incorporating domain knowledge to guide covariate selection, even when using flexible data-adaptive methods.&#13;
In Chapter 3, we explore proximal causal inference, a novel framework designed to address unobserved confounding by leveraging negative control exposures and outcomes to recover the true causal effects. While this approach offers an alternative to the exchangeability assumption, it relies on identification conditions for the proxy variables set and model specifications that remain empirically untestable. We derive closed-form bias expressions under a linear structural equation model to quantify the impact of violating these assumptions and propose a practical bias adjustment strategy using data from an observational ICU study. These results provide a foundation for formal sensitivity analysis and offer insight into the real-world utility of proximal methods.&#13;
Finally, in Chapter 4 we evaluate the impact of the Meaningful Use Incentive Program on hospital performance, using modern causal methods in a multi-period difference-in-differences (DiD) design. We apply a staggered DiD estimation framework, along with a sensitivity analysis of dynamic treatment effect estimates under potential violations of the parallel trends assumption, across a wide range of quality, safety, and process of care measures. By accounting for treatment timing variation, allowing for heterogeneous effects over a longer follow-up period, and testing for violations of identifying assumptions, our study offers a more rigorous and comprehensive assessment of the causal impact of health information technology (IT) policies introduced by the Meaningful Use program. Our findings help reconcile mixed findings in the literature and inform the design of future hospital incentive programs that aim to promote advanced use of EHRs.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159930</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic insights into how collective effects mediate the T cell response</title>
<link>https://hdl.handle.net/1721.1/159929</link>
<description>Mechanistic insights into how collective effects mediate the T cell response
Yin, Rose
T cells play an important role in the adaptive immune system by providing robust responses to foreign pathogens while avoiding widespread autoimmunity. Although many specific microscopic factors are thought to contribute to this self/non-self discrimination, a generalized mechanistic framework, based on theoretical and experimental work, has emerged over the past decade to describe the remarkable robustness of self/non-self discrimination despite the presence of autoimmune T cells in every host. This quorum threshold mechanism states that a threshold number of T cells (a quorum) must be activated by a foreign antigen in a local area for an immune response to ensue. In my thesis, I use analytical and computational models to show how this mechanism enables a response against foreign pathogens while tolerating exposure to self-tissue, and how it increases robustness against perturbations such as changed self-antigen presentation or increased epitope spreading due to inflammation. However, under persistent or severe infections, these models also show that the risk of autoimmunity increases through enhanced sampling of rare epitopes and activation of cross-reactive T cells. These results provide a potential explanation for why persistent infections often trigger autoimmune diseases. To further understand the emergence of the quorum threshold, I developed a population dynamics model. Our results show that steady states corresponding to an effective or ineffective immune response are separated by a threshold dependent on both the activated T cell population concentration and the concentration of a growth factor (IL-2) that is secreted by T cells and absorbed by cells that dampen the immune response. Notably, the threshold’s existence proves robust across randomized parameters, highlighting its fundamental role in regulating T cell responses.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159929</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uniqueness of p-local truncated Brown-Peterson spectra</title>
<link>https://hdl.handle.net/1721.1/159928</link>
<description>Uniqueness of p-local truncated Brown-Peterson spectra
Lee, David Jongwon
When p is an odd prime, we prove that the Fp-cohomology of BP⟨n⟩ as a module over the Steenrod algebra determines the p-local spectrum BP⟨n⟩. In particular, we prove that the p-local spectrum BP⟨n⟩ only depends on its p-completion BP⟨n⟩p̂. As a corollary, this proves that the p-local homotopy type of BP⟨n⟩ does not depend on the ideal by which we take the quotient of BP. In the course of the argument, we show that there is a vanishing line for odd degree classes in the Adams spectral sequence for endomorphisms of BP⟨n⟩. We also prove that there are enough endomorphisms of BP⟨n⟩ in a suitable sense. When p = 2, we obtain the results for n ≤ 3.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159928</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing microearthquakes and shallow structure with dense array and optical fibers</title>
<link>https://hdl.handle.net/1721.1/159927</link>
<description>Characterizing microearthquakes and shallow structure with dense array and optical fibers
Chang, Hilary
Source properties of small earthquakes, such as source dimension and stress drop, help us to constrain source physics and assess seismic hazards. Small events carry information about the stress state in the subsurface. They also help us predict the behavior of larger earthquakes. However, the source properties of small earthquakes (magnitude less than 3) are poorly constrained because of trade-offs with other wave propagation effects. The trade-offs with attenuation can cause the apparent stress drop to vary, resulting in an apparent breakdown of earthquake self-similarity. To date, researchers are still trying to understand the uncertainty in source parameter measurements and to improve their accuracy. In the first part of the thesis, I use a dense array in Oklahoma to investigate the influence of site effects on source parameter modeling. By analyzing ground motions, subsurface velocity structure, and attenuation, I show how these factors relate to site effects, and how source parameter estimations vary under different modeling assumptions. To avoid large site-effect-related biases and uncertainties when modeling source parameters, I suggest (1) assuming a realistic attenuation model, (2) using selected stations on hard rocks instead of using many stations with unknown site conditions, and (3) constraining variables in the model during the inversion to avoid parameter trade-offs.&#13;
&#13;
In the second part of the thesis, I explore the use of fiber-optic cables in several seismic applications. Distributed Acoustic Sensing (DAS) turns optical fibers into dense receiver arrays. These fiber-optic cables have the advantage of being more resilient and easier to maintain than mechanical sensors. The cable provides a dense array that helps us separate source and wave propagation effects for different purposes. Here, I use cables in wells in geothermal reservoirs and a telecom cable on the MIT campus. The applications include structure monitoring and imaging, seismic hazard assessment, and earthquake source characterization. DAS measures strain and requires special considerations to fit into conventional seismic methods built on particle motions. Deconvolution-based methods help deal with the DAS instrument response. The gauge length adds a velocity-dependent amplitude response that we need to consider when modeling the DAS spectrum. I provide workflows for conducting seismic imaging surveys using a telecom cable and downhole DAS for temporal monitoring and source parameter analysis. The cables can reach places that were previously inaccessible. With careful processing, DAS can be a promising tool for structure monitoring, urban seismic hazard assessment, and microearthquake source analysis.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159927</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Regimes for Topology Optimization in Photonics</title>
<link>https://hdl.handle.net/1721.1/159926</link>
<description>New Regimes for Topology Optimization in Photonics
Chen, Mo
Inverse design is a powerful methodology for obtaining non-trivial and non-intuitive photonic structures of unprecedented performance. Topology optimization is a class of inverse design methods that has become increasingly popular in photonics. Numerous topology optimization tools and frameworks have been developed and often yield satisfactory results for various engineering problems. This work explores the subtleties involved in the development and application of topology optimization, and presents new regimes for photonic design, where the key to finding the right solutions lies in posing the right questions. We first review the current frameworks for photonic topology optimization. We point out that, as new algorithms emerge, the lack of standardized validation methods presents a challenge for further advancements. To address this, we provide a comprehensive suite of test problems along with a length-scale metric for comparing designs across different algorithms, aiming to facilitate the development and validation of future inverse design approaches. However, a functioning inverse design algorithm alone is not sufficient to guarantee satisfactory designs. We present two case studies highlighting the importance of careful formulation for achieving the mathematical robustness and tractability that are crucial to the success of optimization. The first case examines the inverse design of 3D-printable metalenses with complementary dispersion for terahertz imaging. It illustrates a physical dichotomy between achieving two distinct dispersion behaviors in a thin structure. We demonstrate that a key aspect in making such a design tractable is carefully balancing the trade-offs between focal quality and scanning rate in the optimization problem formulation. The second case focuses on the inverse design of multiresonance filters via quasi-normal mode theory.
Traditional filter design approaches have various limitations, and directly applying topology optimization leads to numerically stiff formulations. We propose a new practical high-order-filter design method based on a minimal set of analytical design criteria derived from quasi-normal mode theory. We illustrate our approach by designing 3rd and 4th-order elliptic and Chebyshev dielectric filters.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159926</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of Ex-Situ Carbon Mineralization Fundamentals and Scalability in Wet Supercritical Carbon Dioxide</title>
<link>https://hdl.handle.net/1721.1/159925</link>
<description>Investigation of Ex-Situ Carbon Mineralization Fundamentals and Scalability in Wet Supercritical Carbon Dioxide
Fong, Andy
With the world on average a degree Celsius warmer than the preindustrial 1850s, mitigating global warming is an urgent yet technically complex goal. Carbon mineralization, the trapping of carbon dioxide in the mineral phase via reaction with magnesium/calcium-rich silicates to form carbonates, is a promising method to offset carbon emissions. In addition to the reaction with carbon dioxide dissolved in water, these silicates have also been shown to be reactive with water-saturated (wet) supercritical carbon dioxide, but the rates are poorly constrained. In this study, we investigate the kinetics of carbonating olivine, a magnesium silicate-rich mineral, in wet supercritical carbon dioxide at 90 bars between 50 and 170 °C over time. We find that we can sufficiently model the complex dependence of olivine carbonation rates with a nucleation-crystallization mechanism for any temperature and grain size at 90 bars, although more data is necessary to confirm the model’s accuracy. Using our developed model, we predict that we can achieve near-complete carbonation at 300 °C using &lt;10 micron olivine grains in a single day. Scaling of the process suggests that between 0.8 and 1.3 MWh is required per ton of carbon dioxide captured (via amine scrubbing) and sequestered as carbonates, depending on the energy source, which is comparable to recent carbon mineralization strategies such as those proposed by Carbfix. Furthermore, the silica and carbonate product can be utilized for various industrial applications totaling an estimated $461 million for 0.7 megatons of carbon dioxide mineralized, which is Carbfix’s 2028 carbon capture goal. Alternatively, aquatic storage of sequestered carbon dioxide can enhance carbon sequestration by up to twofold, leading to between 0.4 and 1.3 MWh per ton of carbon dioxide captured and stored, although such an initiative would be economically unviable.
We anticipate that large-scale carbon mineralization initiatives in the future need to be both effective and profitable for continued operation. Therefore, this initial evaluation of carbon mineralization in wet supercritical carbon dioxide reveals its environmental and economic viability at large scale. Application of wet scCO₂ carbonation to other minerals such as serpentine or basalt will ultimately require testing their competency to be carbonated in such conditions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159925</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Progress on the Interplay of Machine Learning and Optimization</title>
<link>https://hdl.handle.net/1721.1/159924</link>
<description>Progress on the Interplay of Machine Learning and Optimization
Lin, Zhen
Machine learning and optimization have been playing significant roles in the world. Despite the remarkable advancements in these fields, various crucial problems remain unsolved. In this thesis, we address some of these problems by exploring the interplay of machine learning and optimization.&#13;
In the first part of this thesis, we utilize optimization tools to address two practically important topics in machine learning: interpretability of machine learning models, and improving data for prediction. In Chapters 2 and 3, we focus on improving the interpretability of machine learning models. In particular, Chapter 2 presents an efficient algorithm for training high-quality Nonlinear Oblique Classification Trees using gradient descent. We demonstrate on real-world datasets that this is an effective approach. In Chapter 3, we develop an optimization approach to train low-depth (up to depth 8) classification trees with hyperplanes to closely approximate neural networks. We also incorporate sparsity in the hyperplanes of the trees. In this way, we contribute to increasing the interpretability of neural networks. Computational results on real-world datasets with different sizes of neural networks show the effectiveness of our algorithm. In Chapter 4, we propose an integer optimization method to improve class-imbalanced data. Our method undersamples the majority class and performs better than existing methods on real-world imbalanced datasets.&#13;
In the second part of the thesis, we explore the direction of applying machine learning to optimization. In Chapter 5, we show that optimization methods can significantly benefit from a machine learning treatment. We develop a model-based trust-region method for derivative-free optimization problems under noise. Our method, which uses robust and sparse regression to build models of functions, is much more robust and has higher scalability than existing methods.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159924</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Starting Material-Oriented Strategies in Computer-Aided Synthesis Planning With a Bidirectional Search Algorithm</title>
<link>https://hdl.handle.net/1721.1/159923</link>
<description>Enabling Starting Material-Oriented Strategies in Computer-Aided Synthesis Planning With a Bidirectional Search Algorithm
Yu, Kevin
Retrosynthesis, in which one proposes a reaction pathway towards a target molecule from simpler starting materials, is a fundamental task in synthetic chemistry. Current computational search methods assume the sufficiency of reaching arbitrary building blocks but fail to address the common real-world constraint where the use of specific starting materials is desirable. To this end, this thesis reformulates computer-aided retrosynthesis as a starting material-constrained problem, in which one or more starting materials are given as input in addition to the target structure. Under this formulation, we are able to apply novel strategies to more efficiently navigate the combinatorial explosion of reactions to consider during synthesis planning. First, we demonstrate how training on multi-step synthesis routes inferred from a reaction database allows a neural network to predict the number of steps needed to synthesize targets from other specified building blocks. Using this learned value function in combination with recent advances in bottom-up synthesis planning, this thesis proposes a novel bidirectional CASP algorithm, DESP (Double-Ended Synthesis Planning). We demonstrate the utility of DESP through a number of empirical benchmarks and case studies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159923</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated Modeling Approaches to Quantify Vehicle-to-Grid Services in an Evolving Power Sector</title>
<link>https://hdl.handle.net/1721.1/159922</link>
<description>Integrated Modeling Approaches to Quantify Vehicle-to-Grid Services in an Evolving Power Sector
Owens, James
The U.S. transportation sector emitted 27% of nationwide greenhouse gas (GHG) emissions in 2020. In addition to cleaner fuels and more efficient powertrains, vehicle electrification is poised to be a key driver of sector decarbonization. However, fleet electrification poses an unprecedented coupling of the transportation sector and electric grid. Electric vehicle charging and other new loads, if not sufficiently managed, are anticipated to add significant strain to the grid. In light of these challenges, vehicle-to-grid (V2G) has been proposed as a form of flexible load and decentralized energy storage. Within a V2G framework, grid-connected electric vehicles provide services to power grids, for example by shifting when they charge or discharging their batteries to the grid when power demand is high. Conceptually, V2G can reduce the costs of intermittency, facilitate renewables growth, and provide storage services to the grid.&#13;
&#13;
While V2G continues to evolve and gain market traction, several aspects of the technology, both operational and economic, must be better understood and improved to facilitate widespread adoption. For instance, EVs can theoretically displace stationary energy storage, but to what extent? What are the demand-side implications for the grid? For early technology adopters, particularly commercial fleets, how do travel needs and network tariffs affect V2G revenues? How can one practically simulate V2G and other service outcomes, and do the potential revenues justify the initial investment? &#13;
&#13;
This thesis addresses such questions and concerns through the development and application of methods that (1) quantify the technology's ultimate value proposition at the systems level, and (2) enable risk-informed market participation and financial analysis.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159922</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Globalizing “Humanism”: A Comparative Framework for Understanding Ethical and Literary Revivals Across Eurasia</title>
<link>https://hdl.handle.net/1721.1/159921</link>
<description>Globalizing “Humanism”: A Comparative Framework for Understanding Ethical and Literary Revivals Across Eurasia
Chen, Jason
Humanism is often framed through the lens of Renaissance Europe, where classical revival and secular individualism defined a powerful cultural ideal. Yet similar movements—grounded in ethical self-cultivation, textual engagement, and educational reform—have emerged independently across diverse historical contexts. This study offers a comparative framework for understanding four such revivals: Confucian classicism in Tang and Song China, Byzantine paideia under the Palaiologos dynasty, Renaissance Italy’s metaphysical humanism, and the Arabic Nahda’s reformist thought in the colonial age. Each reflects a distinctive negotiation between inherited tradition, moral agency, and sociopolitical upheaval. Five core features recur across these cases: a belief in human ethical potential, reverence for classical texts, dialogue with religious orthodoxy, institutional mediation, and the emergence of a learned elite committed to public responsibility. Beginning with China and moving westward, the analysis disrupts conventional genealogies and recasts humanism as a plural, adaptive phenomenon. Figures such as Han Yu, Liu Zongyuan, Theodore Metochites, Giovanni Pico della Mirandola, and Rifa’a al-Tahtawi exemplify the variety of humanistic expression, each articulating a vision of ethical renewal suited to their cultural moment. Rather than advancing a fixed definition, the project treats humanism as a historically contingent mode of reflection on what it means to be human—one that emerges in response to crises of meaning, legitimacy, and identity. Across Eurasia, literary and intellectual revivals have served as means for societies to reimagine moral authority, reassert cultural identity, and envision more just forms of life. Reconsidering humanism in this way not only recovers overlooked traditions but enriches the vocabulary available for confronting contemporary challenges.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159921</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence for System Medicine: Methods and Applications</title>
<link>https://hdl.handle.net/1721.1/159920</link>
<description>Artificial Intelligence for System Medicine: Methods and Applications
Ma, Yu
Modern medicine is facing a fundamental shift with the increasing availability of large-scale electronic health data and artificial intelligence-based technologies. In particular, integration across different patient characteristics to optimize, learn, and plan simultaneously across multiple medical tasks of interest, or what we call system medicine, provides opportunities for clinical and operational systems to improve disease diagnosis, operational efficiency, and, most importantly, clinical understanding. This thesis aims to develop and validate novel methods using artificial intelligence and optimization to address challenges faced in this domain. &#13;
&#13;
We introduce general-purpose artificial intelligence frameworks in Part 1. First, we introduce Holistic Artificial Intelligence in Medicine (HAIM), an integrated pipeline that combines multimodal data spanning tabular, time-series, vision, and language modalities into a single framework for downstream task learning. We then develop Multimodal Multitask Machine Learning for Healthcare (M3H), an end-to-end, many-to-many framework that joins the learning of multimodal data with a diverse set of medical and machine learning tasks. This work proposes a novel attention mechanism as well as a new explainability metric that extends previous work on the evaluation of input space contributions (features) to the output space (outcomes). These works are actively being incorporated to improve diagnosis in cardiovascular and oncology studies using ECG and multi-omics data. &#13;
&#13;
We then address real-world adoption concerns to design responsible machine learning models using optimization in Part 2. We first introduce robust regression under averaged uncertainty, which yields exact, closed-form analytical solutions that recover ridge regression. We show how the geometric properties of the uncertainty set are closely linked to the regularization strength of the equivalent ridge regression. We then propose an adaptive, data-driven approach for personalized breast cancer screening scheduling, which integrates an ML-based survival prediction model and a stochastic optimization formulation that balances screening delay and screening frequency. &#13;
&#13;
Finally, we apply predictive and prescriptive analytic methods to improve general medical outcomes in Part 3 and Part 4, respectively. These studies span oncology, trauma, cardiovascular care, and logistics planning. In Part 3, we aim to develop models that most accurately learn the outcome. We show that predictive methods across different machine learning methodologies, including deep neural networks for computer vision tasks and tree-based models (including Optimal Classification Trees and gradient-boosted trees), can significantly improve over existing benchmarks or achieve performance comparable to manual physician practice. In Part 4, we delve into prescriptive analytics, which focuses on assigning the optimal treatment or other clinical decision to achieve the best outcome. We apply the interpretable Optimal Policy Trees methodology across oncology and trauma settings and observe improved medical outcomes (e.g., lower mortality rates).
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159920</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Putting Lipstick on a PIG: Modeling Pine Island Glacier (PIG) Shear Margin Collapse with Compressive Arch Failure and Observations</title>
<link>https://hdl.handle.net/1721.1/159919</link>
<description>Putting Lipstick on a PIG: Modeling Pine Island Glacier (PIG) Shear Margin Collapse with Compressive Arch Failure and Observations
Wells-Moran, Sarah
Pine Island Glacier (PIG) drains 10% of the West Antarctic Ice Sheet and has undergone rapid change in the observational record, contributing to uncertainty in sea level rise projections. The Pine Island Ice Shelf (PIIS), which provides a key buttressing force that slows the flux of ice across the grounding line, has accelerated by 800 m/yr (an approximate 20% increase in speed) between 2015 and 2024, accompanied by a visible increase in damage in the southern shear margin, indicating a partial loss of buttressing. We examine this loss of buttressing to determine the mechanisms through which ice shelves collapse. Buttressing allows an ice shelf to increase in thickness to a point at which the stresses within the ice would exceed the tensile yield strength without the compression provided by buttressing. Following the Compressive Arch Theory proposed by Doake et al. (1998), we hypothesize that when a calving event decouples the ice shelf from a buttressing region, the thicker ice shelf is thrown into tension and rapidly collapses, as happened with the Larsen B Ice Shelf in 2002. We use the Ice-sheet and Sea-level System Model to investigate the instantaneous stress response to loss in buttressing on an idealized glacier, with the goal of finding the changes in shear margin buttressing that most accurately recreate observed changes. In our model, we are only able to replicate observed changes in stress regime by decoupling both shear margins, suggesting the PIIS is currently providing negligible buttressing, allowing PIG to accelerate, thin, and retreat. We construct a timeline of shear margin evolution and collapse over the PIIS from 2015 to 2024 using model outputs of stress field response to changes in buttressing, coupled with observed changes in velocity, effective and principal strain rates, and calving events. Despite losing buttressing from both shear margins, the PIIS is still intact, contrary to our initial hypothesis on compressive arch failure.
We re-frame Compressive Arch Theory to better capture the timescales involved in loss of buttressing. We posit that compressive arch failure from loss of buttressing on short time scales leads to rapid ice shelf disintegration, whereas compressive arch failure occurring on longer time scales allows the ice to viscously relax, leading to ice shelf thinning instead of collapse. This new framework for investigating loss of buttressing allows us to better assess the stability of ice shelves and more accurately model future Antarctic contributions to sea level rise.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159919</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integration of Zip-formwork and conventional formwork systems for shape-optimized concrete in large scale construction</title>
<link>https://hdl.handle.net/1721.1/159918</link>
<description>Integration of Zip-formwork and conventional formwork systems for shape-optimized concrete in large scale construction
Zhuang, Yingjia
Cast-in-place concrete production plays a dominant role in the architecture, engineering and construction (AEC) industry, particularly in large-scale projects, contributing significantly to global material consumption, construction costs, and embodied carbon emissions. Shape-optimized concrete has been developed as a solution for more affordable and sustainable construction, using less material to create efficient structures that meet structural demands. Although extensive research and development has focused on applying shape optimization to prismatic concrete beams, these beams are often limited by the constraints of available formwork and are primarily designed as pre-cast components. This thesis presents the results of optimizing the Zip-Form, a digitally fabricated formwork system made from mild steel, designed for forming shape-optimized concrete beams, and its integration with conventional formwork equipment. The study evaluates the structural performance, embodied carbon, and cost of the Zip-Form integrated system in comparison to a traditional formwork platform used for prismatic beams. The findings highlight the Zip-Form’s potential for forming shape-optimized concrete beams using cast-in-place methods, making it a viable solution for sustainable large-scale construction projects in the current industry. The methodology outlined in this thesis provides a comprehensive design process, beginning with the structural design of the shape-optimized concrete beams, followed by the design of the Zip-Form integrated formwork system to cast the beams, and concluding with an embodied carbon and cost analysis to evaluate the environmental and financial benefits. This thesis aims to bridge academic research and innovation with practical, real-world applications.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159918</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lateral Transfer of DNA in Protocell-like Synthetic Cells</title>
<link>https://hdl.handle.net/1721.1/159917</link>
<description>Lateral Transfer of DNA in Protocell-like Synthetic Cells
Gray, Ryan J.
Understanding how molecules could have moved between primitive cells is a central problem in astrobiology and geobiology. This thesis investigates whether electroporation can mediate the transfer of DNA into and between synthetic cells, potentially enabling gene expression in initially DNA-free compartments. Both TXTL and PURE-based cell-free expression systems were encapsulated in lipid vesicles to evaluate fluorescence as a proxy for GFP expression following electroporation across varying voltages and pulse numbers. In TXTL-based systems, increased fluorescence in electroporated conditions relative to controls supported the feasibility of environmental DNA uptake. PURE-based systems displayed similar trends, though variability in baseline fluorescence and fold-fluorescence complicated interpretation. In experiments designed to model lateral gene transfer (LGT) between synthetic donor and acceptor vesicles, modest fold changes in GFP expression were observed, particularly after multiple electroporation rounds, suggesting limited but detectable DNA transfer between vesicles. While microscopy provided some support for internal expression, its resolution and interpretability were severely limited. Altogether, these findings support the concept that electroporation-like events, such as those generated by lightning on the early Earth, could have promoted the horizontal movement of genetic material among protocells. Additionally, this work highlights key experimental challenges in modeling prebiotic genetic exchange, while also contributing to the development of synthetic biological systems that emulate early evolutionary processes.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159917</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing Environmental Tritium Cycle for Fusion Energy and National Security</title>
<link>https://hdl.handle.net/1721.1/159916</link>
<description>Analyzing Environmental Tritium Cycle for Fusion Energy and National Security
Arias, Liliana R.
Although tritium is a sought-after isotope of hydrogen for fusion fuel, it is important to consider the environmental impacts of its release. To prepare for the elevated tritium releases that may result from commercial fusion power, yearly tritium releases from different types of nuclear facilities are compiled, with an emphasis on fusion reactors. Atmospheric modeling using HYSPLIT and a Gaussian plume model is then conducted to better understand current and future global tritium sources and concentrations and their release pathways in the environment. Despite elevated tritium levels near major sources, most emissions remain within regulatory bounds, although proximity to facilities still matters.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159916</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transient Grating Spectroscopy: Compact System Geometry Developments and Improved Software</title>
<link>https://hdl.handle.net/1721.1/159915</link>
<description>Transient Grating Spectroscopy: Compact System Geometry Developments and Improved Software
Rajagopal, Jonas A.
Transient grating spectroscopy (TGS) is a rapid, non-destructive technique for measuring the thermal, elastic, and acoustic properties of the top several microns of a reflective surface. It has uses across many areas of materials research. Current TGS systems require complex optics tables that occupy substantial space, restricting TGS to a predominantly lab-based method. This thesis first outlines a new design for TGS systems: an asymmetric probe, planar (APP) geometry, which enables TGS to be shrunk and simplified, lowering the barrier to entry and allowing for wider adoption in labs and industry. This Mini-TGS system was benchmarked against an existing system on a single-crystal tungsten sample, showing it produces the same surface acoustic wave (SAW) frequency as the benchmark system. The design enables TGS to be more widely adopted for use in more varied and compact environments because of its smaller size and simplicity. This thesis then outlines a study of reactor pressure vessel (RPV) coupons aimed at further understanding how properties evolve as a function of time in a reactor, as a step towards demonstrating that TGS can reliably detect if an RPV is fit for service. Ultimately, this work unveiled problems in the TGS fitting code. Lastly, this thesis details the software changes made to the general TGS fitting code in response to the RPV study.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159915</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elucidation of gene clusters underlying withanolide biosynthesis in ashwagandha</title>
<link>https://hdl.handle.net/1721.1/159914</link>
<description>Elucidation of gene clusters underlying withanolide biosynthesis in ashwagandha
Reynolds, Erin E.
Withanolides are medicinally important steroidal lactones produced by Withania somnifera (ashwagandha), among other Solanaceae family plants, known for their anti-inflammatory, anti-cancer, and adaptogenic properties. However, the biosynthetic pathway to withanolides is largely unknown, preventing scale-up and hindering pharmaceutical applications. In this thesis, we report a chromosome-scale assembly of the W. somnifera genome, which we use for biosynthetic gene cluster mining. We identify two biosynthetic gene clusters likely involved in withanolide biosynthesis and explore some aspects of their evolution. The identified clusters are among the largest identified in plants to date, and they exhibit an unusual tissue-specific subcluster structure. Next, we characterize the genes in the identified biosynthetic gene clusters using heterologous expression in yeast and tobacco, in conjunction with in vitro enzyme assays. We discover two cytochromes P450 (CYP87G1 and CYP749B2) and a short-chain dehydrogenase (SDH2) responsible for formation of the lactone ring on the sterol side chain, a key chemical feature of withanolides. Two additional P450s (CYP88C7 and CYP88C10) and a sulfotransferase (SULF1) generate the characteristic A-ring structure of withanolides, featuring a C₁ ketone and C₂-C₃ unsaturation. The discovery of SULF1 as a core withanolide pathway enzyme challenges the conventional view of sulfotransferases as tailoring enzymes and suggests a wider role for this enzyme family in plant secondary metabolism. This work opens new avenues for the sustainable production of withanolides through biomanufacturing and for drug development leveraging the withanolide scaffold.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159914</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging System-Level Analyses and Techno-Economic Modeling to Inform the Viability of Electrochemically-Mediated CO₂ Separation</title>
<link>https://hdl.handle.net/1721.1/159913</link>
<description>Leveraging System-Level Analyses and Techno-Economic Modeling to Inform the Viability of Electrochemically-Mediated CO₂ Separation
Ripley-Kenyon, Katelyn M.
Replacing fossil fuels with renewable energy and removing carbon dioxide (CO₂) via carbon capture, utilization, and storage (CCUS) are essential strategies for addressing the climate crisis and achieving net-zero emissions by 2050. While renewable energy is predicted to supply more than 60% of net electricity generation in the United States by mid-century, it is also predicted that coal and natural gas plants will remain operational in the near term to meet growing energy demands. This, coupled with the persistence of hard-to-decarbonize processes, requires point-source capture technologies to mitigate remaining CO₂ emissions. State-of-the-art CO₂ separation systems are typically based on low-efficiency temperature-swing cycles that exploit the natural affinity of alkanolamines for CO₂ at ambient conditions. Alternatively, electrochemical capture systems may enable CO₂ removal from flue gas streams at higher energetic efficiencies while also offering more modular and scalable designs. However, direct comparisons between the thermochemical and electrochemical approaches are scant, likely due to the nascency of the latter.&#13;
&#13;
In this thesis, I develop modeling frameworks that enable system-level comparisons of two types of electrochemical CO₂ capture (eCCC) technologies and the incumbent thermochemical, amine-based capture technologies. I begin by developing a reactive absorption model to predict the absorption column sizes required in “4-stage” eCCC systems (i.e., comprising an electrochemical reactor, absorption column, and flash tank). I use the model to inform operating conditions and molecular properties that will allow these processes to utilize columns that are comparable in size to those presently deployed in thermochemical systems. While this helps address capital cost comparisons, to couple these effects with operating costs I next combine the absorption column model with an electrochemical cell model to predict the levelized cost of capture (LCOC) of the capture platforms at a coal pilot plant facility. This techno-economic model allows for thorough investigation of the property sets, operating conditions, and target cost factors that will lead to conditions where the electrochemical systems can compete economically with amine scrubbing systems. Next, this in-house model is used to probe the effects of scale and flue gas composition on the overall LCOC to provide commentary on the conditions and costs likely for operation at commercial-scale plants. Finally, I apply my knowledge of decarbonization efforts to inform realistic pathways for decarbonizing cement production facilities in the near-term. Ultimately, the goal of this thesis is to lay the foundation for quantitative comparisons between different technologies available for point-source capture applications while also offering models that can be used to investigate the viability of promising molecules and electrolytes in eCCC.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159913</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uniqueness problems in mean curvature flow</title>
<link>https://hdl.handle.net/1721.1/159912</link>
<description>Uniqueness problems in mean curvature flow
Lee, Tang-Kai
We investigate uniqueness phenomena in mean curvature flow, focusing on two central problems: the behavior of the flow near singularities and the extension of the flow beyond singular times. These questions have significant applications in geometry, topology, and analysis. For the first problem, with Jingze Zhu, we formulate a canonical way to study the limit model near a singularity of a generic closed mean curvature flow of surfaces. Using this framework, we establish a uniqueness result for singularity models. As a consequence, we resolve a uniqueness problem for gradient flow lines in ordinary differential equation theory, related to a question posed by Thom and Arnold, and revisited by Colding–Minicozzi. For the second problem, with Alec Payne, we examine the level set flow as a weak formulation that ensures long-time existence and uniqueness of mean curvature flow past singularities. This approach, however, can lead to fattening, a phenomenon reflecting genuine non-uniqueness of the extended flow. While genuine uniqueness cannot always be expected, we address this challenge by establishing an intersection principle for comparing two intersecting flows. We prove that level set flows satisfy this principle in the absence of non-uniqueness. Finally, with Larry Guth, we explore a problem concerning homotopy classes of maps between spheres. Recent progress on this problem relies on delicate analysis of high codimensional graphical mean curvature flow. We use a direct method to refine a homotopy criterion for maps between low-dimensional spheres.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159912</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Steenrod operations and Fukaya categories</title>
<link>https://hdl.handle.net/1721.1/159911</link>
<description>Quantum Steenrod operations and Fukaya categories
Chen, Zihong
The recent introduction of mod p equivariant operations to symplectic Gromov-Witten theory has fueled exciting developments in the field. In this thesis, we develop new tools for understanding these operations and explore an application to the quantum connection. In one direction, we construct certain operations on the equivariant Hochschild (co)homology of a general A∞-category. We show that when applied to the Fukaya category of a nondegenerate closed monotone symplectic manifold, this construction can be identified with the quantum Steenrod operations via Ganatra’s cyclic open-closed maps. A key ingredient in this identification is a new homotopy theoretic framework for studying various equivariant open-closed maps at once, using a combination of cyclic categories, edgewise subdivision, and Abouzaid-Groman-Varolgunes’ operadic Floer theory. In another direction, we utilize quantum Steenrod operations, and Lee’s observation that they are related to the p-curvature of the quantum connection, to study singularities of the quantum connection in characteristic 0, and prove the exponential type conjecture for all closed monotone symplectic manifolds.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159911</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organic influences on hydrated magnesium carbonate mineral formation</title>
<link>https://hdl.handle.net/1721.1/159910</link>
<description>Organic influences on hydrated magnesium carbonate mineral formation
Baldes, Matthew J.
Carbonate minerals retain organic compounds and preserve textural and chemical evidence of microbial activity early in the geologic record of Earth. For this reason, magnesium carbonates thought to be associated with lacustrine deposits in Jezero Crater are an important target of the Mars Sample Return Mission. The presence of hydrated magnesium carbonates in the deposits suggests that these minerals experienced minimal postdepositional alteration and may have the potential to preserve biosignatures from a habitable early martian environment. Microbial influences on calcium carbonate precipitation are well documented, but magnesium carbonates have received considerably less attention as a result of their relative scarcity in terrestrial deposits. The few modern lacustrine environments where hydrated magnesium carbonate minerals form have been proposed as analogs for Jezero Crater. Precipitation often occurs in association with microbial communities in these alkaline lake systems, but little is known about the potential for hydrated magnesium carbonates to preserve biosignatures, especially in depositional environments analogous to the carbonate sediments and coatings identified by the Perseverance Rover in Jezero Crater. This thesis explores organic influences on hydrated magnesium carbonate precipitation and the potential for these minerals to retain evidence of microbial activity. I begin by culturing cyanobacterial biofilms in solutions that replicate natural lacustrine environments where hydrated magnesium carbonate precipitation occurs. I designed experiments to isolate the role of cyanobacterial extracellular polymeric substances (EPS) in mediating the mineralogy of hydrated magnesium carbonate precipitates and the rate of amorphous magnesium carbonate (AMC) maturation.
I also compared the precipitates that formed in association with cyanobacterial biofilms to those formed under inorganic conditions to determine if hydrated magnesium carbonates preserve biosignatures. The results from these experiments demonstrate that cyanobacterial EPS promotes the early stabilization of the hydrated magnesium carbonate mineral dypingite and that biologically associated precipitates encapsulate cells and retain organic compounds detectable with Raman spectroscopy. I complement this laboratory work by seeking to identify similar spectroscopic and textural evidence of microbial activity in a range of carbonate deposits from Lake Salda, Türkiye, including sands, crusts on coarse siliciclastic sediments, and alteration veins in serpentinized ultramafic bedrock. Analyses of these samples revealed that hydromagnesite sands and crusts have a higher potential to preserve biosignatures than dolomite veins in a system analogous to Jezero Crater.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159910</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A probabilistic perspective on graph coloring</title>
<link>https://hdl.handle.net/1721.1/159909</link>
<description>A probabilistic perspective on graph coloring
Mani, Nitya
Graph coloring is perhaps the most fundamental, deeply studied, and well-known area in graph theory, with many of the most basic questions in the field still wide open. Graph coloring questions often have wide-ranging applications across fields as diverse as statistical physics, theoretical computer science, route planning, disease spread, cybersecurity, circuit design, and network science more broadly. This thesis studies graph coloring from a probabilist’s perspective, focusing on graph coloring problems that share an underlying theme: given an exponentially large family of objects derived from a graph vertex-coloring, can we understand what a typical or random object in this large family looks like without manually searching through exponentially many alternatives? The majority of this thesis is centered around two basic graph coloring problems, each of which has been heavily studied and comes with a rich history and many applications. We begin this thesis by establishing a fourth moment phenomenon for the number of monochromatic copies of any fixed subgraph in a given graph sequence (when given at least eight colors). We also study, and in many special cases characterize, failures of the fourth moment phenomenon in the two-color regime. We then continue to our second major topic of study. We essentially resolve a folklore conjecture about the uniform distribution of proper colorings of a bounded-degree tree. As a consequence, we are able to make significant progress towards a longstanding conjecture in the statistical physics community and one of the oldest and most basic still-open questions in the field of approximate counting and sampling. We also disprove the efficacy of a particular, popular approach to tackling this pair of conjectures.
Finally, we conclude the thesis by taking a different approach to studying typical samples from exponentially large families, applying the graph container method to study two coloring-adjacent questions: upper bounding the number of error correcting codes and understanding the structure of typical unit-distance avoiding sets in R².
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159909</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Bridging and Governing Decentralized Communities</title>
<link>https://hdl.handle.net/1721.1/159908</link>
<description>Towards Bridging and Governing Decentralized Communities
Saldías Fuentes, Belén Carolina
"Unless the spaces in a building are arranged in a sequence which corresponds to their degrees of privacy the visits made by strangers, friends, guests, clients, family, will always be a little awkward." (Alexander, 1977) — Unlike physical spaces, where we can move seamlessly between different environments with varying degrees of privacy, much of our online experience occurs in noisy, crowded, and imposed public areas. This can undermine meaningful engagement, deepen social divides, and exacerbate anxiety, polarization, and distrust arising from unnecessary friction and misunderstandings. Moreover, while different communities with distinct values and norms often share these same public venues, they are typically subject to one-size-fits-all policies that fail to address local contexts. Consequently, toxic behavior is policed at the platform level rather than by the communities themselves, leading to oversimplified governance solutions that favor some communities while silencing others.&#13;
&#13;
Fortunately, emerging strategies in decentralized protocols and networks have begun to change this dynamic. Decentralized systems designed for local governance can empower communities to create more nuanced and context-sensitive rules. However, these approaches remain largely inaccessible to non-technical users and risk creating a "paradox of decentralization," wherein isolated servers or communities potentially deepen echo chambers. This thesis contends that by placing community governance and user agency at the center of online platforms—and by leveraging advances in large language models (LLMs)—we can build healthier digital spaces that foster pro-social interactions while respecting individual groups' autonomy.&#13;
&#13;
To explore these possibilities, this dissertation examines how intentional design principles can promote constructive communication in decentralized contexts. First, it presents a large-scale historical Reddit dataset, encompassing over 230K removed posts across more than 19K mission-defined communities, that captures a diverse range of speech, community norms, and moderation approaches. By analyzing over 60K community rules, I propose an empirically grounded norms schema and reveal how the purpose statement correlates with pro-social behavior reflected in community-centered discourse.&#13;
&#13;
Building on these insights, the dissertation next tackles the challenge of shifting from centralized, top-down moderation to distributed, community-specific content governance. While centralized methods provide highly generalizable moderation powered by advanced AI, they hinder specificity and community-specific definitions of behavior, limiting community and user participation in shaping how their content is moderated and ranked. I prototype and evaluate tools for (i) explainable, decentralized content moderation—where interpretable models illuminate why a post is flagged or removed—and (ii) surfacing unspoken differences in the definitions and understanding of seemingly similar norms across communities. These prototypes show how LLMs can assist by clarifying value mismatches, supporting local decision-making, and enabling communities to mediate misunderstandings across divides.&#13;
&#13;
Finally, I consolidate these findings in a real-world social network platform called Odessa—a DEcentralized Social Systems App—deliberately designed as a user-friendly, decentralized environment that allows communities to define—and iteratively refine—their own norms, moderation, ranking algorithms, and, more generally, governance strategies. Through system deployment and user experiments, I investigate how participants navigate local governance controls and interact within bridged spaces across communities. Odessa's bridging mechanisms illustrate how communities can preserve distinct values without sacrificing cross-community connections. By open-sourcing Odessa, I provide a framework for researchers and practitioners to test human-AI partnerships in governance and a learning environment for apprentices. The results presented here underscore both the opportunities and challenges in democratizing content moderation, highlighting the pivotal role of transparent AI in promoting trust and mutual understanding.&#13;
&#13;
This dissertation makes the case that future social media ecosystems should emphasize bottom-up, community-driven governance aided by interpretable AI tools. By enabling communities to shape their social expectations through purpose and norms, explain decisions through transparent AI and access to human rationales, and forge connections with other communities, we can cultivate online environments where pro-social discourse thrives. In doing so, we move beyond merely "fighting toxicity" toward intentionally designing spaces that support constructive dialogue and genuine community development.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159908</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Optimal and Approximate Algorithms in Optimization Under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/159907</link>
<description>Efficient Optimal and Approximate Algorithms in Optimization Under Uncertainty
Gonzalez, Victor
Some of the most important and challenging decisions must be made with incomplete information. This missing information can concern events or conditions that have already happened or that have yet to occur, and it can relate to what decisions are available as well as the consequences of those decisions. Optimization under uncertainty has applications in a wide range of settings, including hiring (How good is the candidate? Will a better candidate arrive?), disaster relief (Where are people who need help? How long do rescue teams have before they are in a more critical condition?), and manufacturing (How much will we be able to manufacture? What orders should we accept?). These problems can be solved naively by reformulating them as deterministic problems, but this can dramatically increase the problem size, making the naive reformulation computationally expensive to solve. We aim to develop efficient algorithms to solve optimization problems under uncertainty and to construct approximate algorithms that quickly approximate the solution in instances that are too large to solve exactly. In Chapter 2, we discuss a secretary problem with generalized decisions. The goal is to “rank” incoming items arriving in an uncertain order. Once an item arrives, it must be assigned a rank before the next item arrives, and this rank cannot be changed when new items arrive. We exploit the structure of the problem using exact dynamic programming to construct an algorithm that computes an exact solution. Additionally, we develop heuristics that can be used to construct an approximate solution in larger problems. In Chapter 3, we discuss a search and rescue drone problem, in which we construct drone routes to maximize the number of people in need of rescue who are reached in the event of a natural disaster.
After constructing a nonlinear mixed-integer program (MIP), we develop simplifying policies that allow us to solve it in real time, allowing for updates as new information is learned. In Chapter 4, we discuss a two-stage stochastic knapsack problem. In manufacturing settings, orders must be accepted or rejected before it is known how many resources will be available to fill those orders. We formalize this problem as a two-stage stochastic knapsack problem. We construct lower bounds based on feasible solutions and upper bounds based on the optimal solutions of a relaxation of the problem. We then use these bounds to construct optimal solutions faster than traditional solvers. We then develop algorithms that construct approximate solutions for larger instances that perform well compared to the optimal solutions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159907</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Adaptive Robust Optimization Approach to Electricity Markets Under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/159906</link>
<description>A Unified Adaptive Robust Optimization Approach to Electricity Markets Under Uncertainty
Koulouras, Angelos Georgio
Electricity grid operations in the US rely heavily on two short-term markets: the Day-Ahead Market (DAM) and the Real-Time Market (RTM). Although the DAM is the cornerstone of electricity markets, it does not adapt to the fast-changing reality in the grid, such as the increased uncertainties due to renewables. Therefore, the existing deterministic market suffers from inherent uncertainties and creates inefficiencies, which require ad hoc and suboptimal solutions, like out-of-market interventions by the grid operators. To address these issues, in this thesis, we advocate for an adaptive mindset in electricity markets and propose a unified and adaptive redesign of the DAM. The proposed market co-optimizes the existing DAM and out-of-market processes, like the Reliability Unit Commitment (RUC), under adaptive robust optimization (ARO). Through ARO, we explicitly procure and price flexibility using adaptive reserve products that provide generation plans contingent on the uncertainty in the RTM. The grid uncertainty is captured through uncertainty sets that contain all the scenarios against which the market operator hedges, while it is priced through new marginal pricing mechanisms. In Chapter 2, we provide marginal pricing for uncertainty in ARO as a technical enabler of the proposed market. We derive locational marginal prices for unit commitment problems with ARO under load and capacity uncertainty and provide guarantees on the participant incentives under worst-case uncertainty. These pricing mechanisms are then used in Chapter 3, which features the redesign of the DAM. Specifically, the proposed DAM eliminates RUC-like processes by introducing deterministic reserve products that were previously procured in a nontransparent way by the market operators. It also hedges against load forecast errors by using adaptive reserve products that reward participant flexibility.
The overall design, which is also applied to ISO New England market data, increases the social welfare and reliability in the market and reduces the arbitrage opportunities. Finally, in Chapter 4, we provide data-driven uncertainty calibration methods for the proposed market. We determine the size of the uncertainty set using machine learning models and mixed-integer optimization, leveraging historical data that consist of covariates or features. This method has been successfully applied to wind generation forecasts from a vendor that caters to a large US grid operator.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159906</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Oceans, like highways</title>
<link>https://hdl.handle.net/1721.1/159905</link>
<description>Oceans, like highways
Kang, Emily
This coming-of-age screenplay of loss, forced confrontation of one’s past, and self-discovery tracks a young female protagonist who runs away from a life in her family’s touring circus. Her story begins in a vivid, unorthodox environment and moves into a more mundane setting as she builds a life of her own, navigating the world outside the circus and away from her family. Through trial and error, she pursues a career in journalism, seeking to honestly tell the stories of others. Eventually, she is presented with an opportunity that seems perfect in almost every dimension, except that this commission requires her to confront her own story rather than deferring to those of others. The thesis aims to explore a story of emergence from the enclosed bubble of one environment into a reality where only some of the prior rules still apply. The protagonist explores the question of how to reconcile with one’s past when forced to, despite her best attempts to avoid doing so. Through the many possible lenses for thinking about a past lifetime (nostalgia, gratitude, and regret, among many others), this story grapples with who we are in the midst of leaving everything we know behind, and how we process our past experiences while in a new stage of our lives.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159905</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Mesoscale Eddies: the Effects of Resolution on Ocean Turbulence</title>
<link>https://hdl.handle.net/1721.1/159904</link>
<description>Modeling Mesoscale Eddies: the Effects of Resolution on Ocean Turbulence
Brock, Lucy
We describe a non-adiabatic idealized model for studying mesoscale turbulence in the global ocean. Using the ocean model Oceananigans, we perform a grid refinement study to determine the minimal resolution required to represent mesoscale eddies in the primitive equations. Convergence is evaluated through several metrics, including surface and depth-integrated kinetic energy, spectra, and zonally averaged temperature, in order to establish quantitative resolution thresholds for physical fidelity. We find that while coarse-resolution simulations capture large-scale flow features, key mesoscale dynamics—including vertical stratification gradients and kinetic energy spectra—only converge at resolutions finer than 1/4°. Differences between the 1/8° and 1/16° simulations are small, suggesting that 1/8° resolution may be sufficient for resolving the mesoscale eddy field for many diagnostic purposes in idealized setups.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159904</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Triple oxygen isotope measurements in chert: insights into the Snowball Earth glaciations</title>
<link>https://hdl.handle.net/1721.1/159903</link>
<description>Triple oxygen isotope measurements in chert: insights into the Snowball Earth glaciations
Freudenburg-Puricelli, Markey
Snowball Earth events represent a critical component of the history of the planet, particularly for the trajectory of life, atmospheric oxygen, and planetary habitability. Many questions remain about the dynamics of these global glaciations, especially regarding the relationship between the cryosphere and hydrosphere during this time. This study analyzes silica precipitates within a carbonate sequence immediately underlying a Cryogenian diamictite to better understand this relationship, particularly the chemistry of subglacial meltwater. Using triple oxygen isotope measurements, clumped isotope palaeothermometry, uranium-lead geochronology, and SEM/EDS and XRD analyses, we present interpretations of both the host rock and possible scenarios for the geochemistry of the precipitating fluid(s) responsible for these silica cements. We posit that these cherts are precipitates either from syn-glacial, sub-ice meltwaters or deglacial fluids from the end of the Marinoan glaciation, providing useful insights into the chemical composition of these source waters and demonstrating the utility of chemical precipitates as a record of ancient sub-ice conditions.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159903</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Method for Recharge and Sustainable Groundwater Use Estimation Using Soil Moisture Time-Series Analysis</title>
<link>https://hdl.handle.net/1721.1/159902</link>
<description>Method for Recharge and Sustainable Groundwater Use Estimation Using Soil Moisture Time-Series Analysis
Kummel, Kathryn
Accurately estimating groundwater recharge is essential for sustainable aquifer management, yet recharge remains difficult to measure directly, especially at large spatial scales. This thesis presents a method to estimate recharge using soil moisture drydown dynamics—the period following precipitation during which soil water is lost to evaporation and drainage. A two-component continuous loss function was developed to model this moisture loss, separating evaporation and drainage processes in a physically consistent, mathematically tractable framework. Using high temporal resolution in situ data from the ARM Southern Great Plains site, the method was calibrated and validated, showing that daily or sub-daily data capture the full shape of the loss function. A key outcome was the derivation of a hydrologic length scale (λ) from precipitation and soil moisture data, enabling conversion of unitless drainage estimates into physically meaningful recharge fluxes (mm/day). The methodology was then applied to satellite-based soil moisture data from the SMAP mission and the combined SMAP-SMOS product. Despite resolution and noise limitations, these datasets produced reasonable drainage estimates, and the combined product showed particular promise for capturing drydowns at global scale. The findings demonstrate that soil moisture observations—when analyzed with appropriate temporal resolution and physical modeling—can provide a scalable, remote sensing-based approach to estimating groundwater recharge worldwide.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159902</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Non-Rotational Ocean Circulation and Heat Distribution in Icy Moons</title>
<link>https://hdl.handle.net/1721.1/159901</link>
<description>Modeling Non-Rotational Ocean Circulation and Heat Distribution in Icy Moons
Nath, Anika
Subsurface oceans beneath the ice shells of icy moons like Europa and Enceladus are considered promising environments for extraterrestrial life. Their long-term habitability depends on internal heating and efficient vertical heat transport to maintain liquid water beneath the surface. This study models vertical heat diffusion in a non-rotating ocean column to investigate thermal structure and energy balance in such systems. A one-dimensional numerical simulation was developed using temperature-dependent thermal conductivity and fixed Dirichlet boundary conditions, initialized with a linear temperature gradient from −10 K at the surface to +10 K at the base. Over 1000 time steps, the temperature profile became nonlinear, with a kink indicating the transition from ice to water. Despite fixed boundary temperatures, the interior warmed, and the average temperature rose to 2.84 K. This resulted from asymmetric conductivity: efficient heating from below and slow heat loss through the upper ice. These results illustrate how conductivity structure controls thermal evolution and ice shell stability on ocean worlds.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159901</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Monitoring Anthropogenic Carbon in Cape Cod Bay</title>
<link>https://hdl.handle.net/1721.1/159900</link>
<description>Monitoring Anthropogenic Carbon in Cape Cod Bay
Neithardt, Daina M.
Coastal oceans are diverse regions that are highly important to human activity, coastal ecosystems, and carbon uptake. Parameters such as pH on the total scale (pHₜ), Dissolved Inorganic Carbon (DIC), and Total Alkalinity (TA) contribute to understanding the health of coastal waters such as Cape Cod Bay, yet are resource-intensive to measure, and historical data for the bay are sparse. Seawater collected from Cape Cod Bay was analyzed for pHₜ, DIC, and TA and compared to historic data from the region. A multiple linear regression was performed to create a model to predict the measured parameters of the carbon system. Predicted TA accurately matched the measured values for the open water of the bay, while performing less accurately for near-coast samples. DIC could be predicted for the open water, although not to the same degree of precision as TA, while pHₜ showed little correlation with the predictors. Additionally, analysis of historical data revealed an extensive aragonite desaturation event in Cape Cod Bay during fall 2021.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159900</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>First Visible Wavelength Lightcurves for the Northern Hemispheres of Titania and Oberon</title>
<link>https://hdl.handle.net/1721.1/159899</link>
<description>First Visible Wavelength Lightcurves for the Northern Hemispheres of Titania and Oberon
Colclasure, Abigail M.
The most recent published lightcurves of the large Uranian satellites date to 1989, and no lightcurves of the satellites’ northern hemispheres have been published. In this work, I present the first visible-wavelength lightcurves of the northern hemispheres of Titania and Oberon. Observations of the Uranian satellites are inherently difficult given their proximity to Uranus. Contamination from stray Uranian light is a major challenge, and the background near the satellites must be well characterized. I mitigated the effects of stray Uranian light using point spread function photometry. I modeled Uranus with a Lorentzian with the same full width at half maximum as the stellar point spread function. I also determined that Uranus’s profile is poorly modeled with a Gaussian or with the stellar empirical point spread function. After accounting for Uranian light in this way, there remains significant correlation between the photometric measurements of Titania and Oberon. I considered what may be causing this correlation and suggest several paths forward.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159899</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Tokamak Assessment of Modeled Temperature Profiles</title>
<link>https://hdl.handle.net/1721.1/159898</link>
<description>Multi-Tokamak Assessment of Modeled Temperature Profiles
Yanna, Kaitlyn M.
This study validates the predictive capability of a newly formalized modeling workflow—referred to here as MAESTRO—developed by Dr. Pablo Rodríguez Fernández of the MIT Integrated Modeling Group by comparing simulated plasma temperature profiles with experimental data from three well-documented tokamak discharges: Holland (2011) [1], White (2014) [2], and Zagorski (2015) [3]. The validation study uses the iterative TRANSP and PORTALS transport solvers to achieve flux-matching and self-consistency between heat sources and transport. The experimental temperature and density profiles were used as a starting point for the analysis. Three different SAT rules were used (SAT3 [4], SAT2-EM [5], and SAT2-EM as implemented in ASTRA [6]) and the edge boundary conditions were perturbed ±15% to simulate experimental error. The resulting profiles were plotted against the experimental profiles to validate the model’s accuracy. The percent difference between the simulated and experimental stored energy was calculated across the three cases. The results establish confidence in MAESTRO’s ability to predict future tokamak performance, while identifying areas for model improvement.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159898</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the external forcing of Indian Ocean climate variability across timescales</title>
<link>https://hdl.handle.net/1721.1/159897</link>
<description>On the external forcing of Indian Ocean climate variability across timescales
Tiger, Benjamin H.
It is imperative to understand the dynamics of external climate forcings and the nature of the climate system’s responses for improved predictability. These forcings include low-probability, high-impact events like explosive volcanic eruptions as well as the continued injection of anthropogenic greenhouse gases into the atmosphere. This thesis explores how external forcings affect Indian Ocean climate in the past, present, and future using paleoclimate archives in conjunction with observational and climate model data. Chapter 2 presents a novel geochemical stalagmite record from northern Madagascar that spans the end of the last glacial period. Stable isotope and trace metal proxies indicate drier conditions in response to North Atlantic cooling events, such as Heinrich stadials, and wetter conditions during North Atlantic warming events, such as the Bølling–Allerød. These responses are opposite to what would be expected from north-south shifts in the Intertropical Convergence Zone. Instead, we hypothesize that west-east tropical Indian Ocean temperature gradient variability akin to the modern-day Indian Ocean Dipole explains the consistent hydroclimate response to North Atlantic forcing reconstructed at eastern African sites. Chapter 3 explores the effects of volcanic eruptions on interannual Indo-Pacific climate variability using an ensemble of last millennium simulations. Following the largest tropical eruptions, these simulations demonstrate a consistent negative Indian Ocean Dipole response which leads an El Niño. This response scales with eruption intensity and persists for up to 8 years for the strongest events. We also find that Interdecadal Pacific Oscillation phasing at time of eruption preconditions the initial Indian Ocean Dipole response via low frequency thermocline depth modulation.
Finally, in Chapter 4 we use marine sedimentary archives in combination with climate simulations to expand on the Atlantic-Indian Ocean teleconnection hypothesized in Chapter 2. The reconstructed west-east surface temperature gradient responds in lockstep to previous instances of Atlantic Meridional Overturning Circulation (AMOC) variability during the last glacial period, such as Heinrich stadials, the Bølling–Allerød, and the Younger Dryas. An analysis of single-forcing simulations featuring meltwater addition to the North Atlantic under glacial and interglacial boundary conditions further demonstrates this inter-basin connectivity. We find that in simulations of high greenhouse gas emission scenarios, uncertainties in future Indian Ocean temperature and precipitation patterns are attributable to uncertainties in the magnitude of future AMOC weakening. This thesis bridges disparate timescales and data sources to gain insight into how the external forcing of the Earth system works at a fundamental level, from geochemical records of abrupt climate transitions during the last ice age to numerical simulations of the Atlantic overturning slowing by the end of the 21st century.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159897</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Paleo-aridity records investigated through drip water chemistry in Lehman Caves, Nevada</title>
<link>https://hdl.handle.net/1721.1/159896</link>
<description>Paleo-aridity records investigated through drip water chemistry in Lehman Caves, Nevada
Knight, Rory S.
In the Great Basin of the southwest United States, climate change is predicted to cause increased precipitation variability, making the future climate of the region uncertain. The paleoclimate record has direct examples of dramatic changes in water availability in this area, allowing for a comparison of precipitation changes and responses for the Great Basin. In Lehman Cave, Nevada, ten drips above actively-forming stalagmites were sampled monthly. Glass growth plates were also placed above three actively-forming stalagmites, allowing for the collection of new calcite growths. This project analyzed the samples for Mg/Ca, Sr/Ca, and U/Ca ratios to provide a comparison of the composition of calcite and the drip waters from which they precipitate. This will improve our understanding of the paleo-aridity of the Great Basin region, as well as provide useful context for the changing precipitation patterns expected with modern climate change.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159896</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Traversing Rugged Domains: Explorations in Non-convex Optimization Theory and Software</title>
<link>https://hdl.handle.net/1721.1/159895</link>
<description>Traversing Rugged Domains: Explorations in Non-convex Optimization Theory and Software
Dixit, Vaibhav Kumar
This thesis introduces theoretical and computational frameworks for nonlinear, nonconvex optimization problems in statistics, machine learning, and optimal control. Disciplined Geodesically Convex Programming (DGCP) extends convexity verification to Riemannian manifolds, enabling optimization on curved spaces with global optimality guarantees. We develop rules and atoms for Cartan-Hadamard manifolds, particularly symmetric positive definite matrices, transforming nonconvex problems into tractable ones through Riemannian geometry. We also present Optimization.jl, a unified interface for diverse optimization methods that supports specialized implementations for specific problem classes. Its modular architecture integrates automatic differentiation with an extensible plugin system. The framework’s capabilities are demonstrated through a GPU-accelerated hybrid method combining Particle Swarm Optimization with L-BFGS, and an augmented Lagrangian approach with stochastic inner optimizers that connects constrained optimization with machine learning techniques. Our work combines theoretical foundations with practical implementation, providing researchers with tools to use advanced optimization methods without specialized mathematical knowledge.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159895</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization and Quantification of Solid Electrolyte Interphases for Composition-Functionality Relationships at Lithium Metal Electrodes</title>
<link>https://hdl.handle.net/1721.1/159894</link>
<description>Characterization and Quantification of Solid Electrolyte Interphases for Composition-Functionality Relationships at Lithium Metal Electrodes
Steinberg, Katherine Julia
Lithium (Li) has the lowest electrochemical reduction potential and density of any metal, making it an exceptionally desirable anode material for batteries and a powerful chemical reductant. However, the reducing nature which makes Li so useful brings challenges: it is thermodynamically unstable in practical liquid electrolytes, driving the formation of a passivation film called the solid electrolyte interphase (SEI). The SEI mediates transport and reactivity at the Li surface, and in practical systems it both consumes active Li directly and leads to spatial heterogeneity in fluxes to and from the lithium surface, resulting in inefficiency in plating and stripping. Together, these effects make the SEI the most important factor determining the efficiency of Li electrochemistry, but its nanoscale, heterogeneous, and reactive nature make it extremely challenging to study experimentally. As a result, existing understanding of the impact of composition and structure on SEI functionality is limited. This thesis aims to enhance conceptual understanding in this space, combining multimodal characterization, the design and application of informative model systems, and the quantification of key phases to reveal mechanistic insights that advance understanding of composition-functionality relationships at Li interfaces. &#13;
&#13;
To begin, this work focuses on the role of the SEI in Li-mediated electrochemical ammonia synthesis (LiMEAS), one of the most promising electrochemical pathways for nitrogen fixation. Here, quantification of major side products, multiscale imaging, and spectroscopic analysis were conducted methodically in four model systems, which introduced the presence of nitrogen gas and a proton donor separately. This study revealed that the electrolyte-derived SEI inhibits reactivity between Li and nitrogen, and that the proton donor is needed to disrupt this passivating interphase. &#13;
&#13;
Next, focus shifted to Li metal battery anodes. Lithium carbonate has long been considered beneficial in anode SEI, but the field has lacked a mechanistic explanation for its effects. Here, lithium carbonate was studied through the development of two model systems, a model SEI formed by sequentially reacting oxygen and carbon dioxide with metallic lithium, and Li-copper (Cu) half cells saturated with either argon or carbon dioxide. Through electrochemical impedance analysis on the model SEI, lithium carbonate was found to exhibit elevated conductivity compared to other common inorganic SEI materials. Cycling and subsequent titration analysis of Li-Cu cells revealed that carbon dioxide addition led to less inactive lithium formation during cycling, and that this avoided capacity loss was the driver behind increased Coulombic efficiency (CE) in numerous electrolytes. &#13;
&#13;
Finally, an analysis was conducted to decipher unresolved materials in a set of techniques for the quantitative analysis of Li anode Coulombic inefficiencies. These techniques directly quantify capacity losses from formation of inactive lithium and several SEI materials, but lack the ability to delineate between residuals lost during cycling or sample processing, and SEI materials not yet resolvable by quantitative techniques. Here, a set of measurements was developed to explicitly measure material losses during sample processing steps. This work confirmed that material losses do not alter broader trends between electrolytes, validating simpler approaches for electrolyte comparisons while also offering a protocol that can be used when quantitative material accounting is of particular importance. &#13;
&#13;
Together, these studies illustrate a multimodal approach for deriving mechanistic insights into the relationships between SEI composition and electrode performance.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159894</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Higher dimensional fractal uncertainty</title>
<link>https://hdl.handle.net/1721.1/159893</link>
<description>Higher dimensional fractal uncertainty
Cohen, Alex
We prove that if a fractal set in Rᵈ avoids lines in a certain quantitative sense, which we call line porosity, then it has a fractal uncertainty principle. The main ingredient is a new higher dimensional Beurling–Malliavin multiplier theorem, which allows us to construct band-limited functions that decay rapidly on line porous sets. To prove this theorem, we first explicitly construct certain plurisubharmonic functions on Cᵈ. Then, following Bourgain, we use Hörmander’s L² theory for the ∂̄ equation to construct band-limited functions. The main theorem has since been applied by Kim and Miller to lower bounds for the mass of eigenfunctions on higher dimensional hyperbolic manifolds.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159893</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Invertible Functorial Field Theory for Symmetry Breaking and Interactions in Quantum Field Theory</title>
<link>https://hdl.handle.net/1721.1/159892</link>
<description>Invertible Functorial Field Theory for Symmetry Breaking and Interactions in Quantum Field Theory
Krulewski, Cameron
We apply invertible field theories to study two questions in quantum field theory. Specifically, we study reflection-positive fully-extended invertible field theories on manifolds with twisted spin structures, which are computed as Anderson-dual bordism groups [1, 2].&#13;
&#13;
In high energy physics, invertible field theories represent anomalies of quantum field theories. Our first application is toward ’t Hooft anomaly matching—a method first developed in the 1980s in which one treats anomalies as invariants of theories of interest and uses them to compute how quantum field theories change under physical processes. Specifically, we model three related processes around a form of spontaneous symmetry breaking via a charged order parameter using a twisted Gysin sequence of Anderson-dual bordism groups. We study the Smith maps of Madsen-Tillmann spectra that underlie the sequence, collecting examples and cataloging periodicities. Finally, we compute an extensive set of examples of physical interest and draw physical predictions from the results.&#13;
&#13;
In condensed matter physics, invertible field theories model the low energy field theories of symmetry-protected topological phases (SPTs). In this second application, we develop and compute homotopical free-to-interacting maps to compare two classifications of fermionic SPTs: those for free (i.e. non-interacting) models, and more general interacting classifications. These maps contribute to what has been a prolific line of research in the physics literature for the past fifteen years. Generalizing Freed–Hopkins [1], we construct maps from K-theory to twisted spin IFTs using T-duality and twisted versions of the spin orientation of K-theory [3]. We focus on two situations: weak phases [4, 5], which are SPTs protected by discrete translation symmetry, and primed phases [6], which are closely related to the famous tenfold way [7, 8], but which have a very different interacting classification. In the latter case, we demonstrate the dependence of the interacting classification on more than the Morita class of the symmetry algebra.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159892</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Tradeoffs and Symmetry in Polynomial Nonnegativity</title>
<link>https://hdl.handle.net/1721.1/159891</link>
<description>Computational Tradeoffs and Symmetry in Polynomial Nonnegativity
Harris, Mitchell
Understanding when a polynomial is nonnegative on a region is a fundamental problem in applied mathematics. Although exact conditions for nonnegativity are computationally intractable, there has been a surge of recent work giving sufficient conditions for nonnegativity to address its many practical applications. A major trend in this direction has been the use of convex optimization to characterize polynomials that are sums of squares (SOS); nevertheless, this well-studied condition can be computationally intensive for polynomials of moderate degree and dimension. &#13;
This thesis addresses the challenge of balancing computational cost against the strength of sufficient conditions for nonnegativity. We make progress towards bridging the gap between simple but crude sufficient conditions, and the more powerful but expensive SOS approach.&#13;
In the first part, we introduce new certificates of nonnegativity that may be used when SOS is too expensive yet cheaper sufficient conditions are too conservative. For this, we leverage different features of the polynomial, including its Bernstein coefficients, a lower-degree interpolant, or its harmonic decomposition.&#13;
In the second part, we construct coordinate-invariant sufficient conditions for nonnegativity and study the symmetry properties of the space of Gram matrices. By considering it as a representation of GL(n,R) and combining this module structure with classical invariant theory, we construct an explicit equivariant map for nonnegativity certification. We further introduce an alternative approach using equivariant neural networks, analyzing their benefits and limitations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159891</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Affine Springer Fibers and the Kazhdan-Lusztig Map</title>
<link>https://hdl.handle.net/1721.1/159890</link>
<description>Affine Springer Fibers and the Kazhdan-Lusztig Map
Chua, Anlong
Let G be a connected reductive group with Lie algebra g and Weyl group W. Let P ⊂ G((t)) be a parahoric subgroup with Levi quotient Gₚ. Using the topology of Lie P, Kazhdan and Lusztig define a map from nilpotent orbits in Lie Gₚ to conjugacy classes in W. This thesis proves compatibilities between Kazhdan-Lusztig maps associated to different parahoric subgroups, as well as the Kazhdan-Lusztig map for the Langlands dual. These compatibilities come from studying the W-representation on the cohomology of affine Springer fibers. The main tool is Yun’s Global Springer Theory. We give two applications of these compatibilities. The first is an affine analog of the classical picture relating singular supports of IC sheaves on the flag variety with special nilpotent orbits. The second is a resolution of Lusztig’s conjecture that strata can be described by fibers of (parahoric) Kazhdan-Lusztig maps.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159890</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Changing Role of Reactive Nitrogen in the Troposphere</title>
<link>https://hdl.handle.net/1721.1/159889</link>
<description>The Changing Role of Reactive Nitrogen in the Troposphere
Dutta, Ishir
Nitrogen is the most abundant molecule in the Earth’s atmosphere and is one of the essential ingredients for life as we know it. Human activities, especially over the last century, have radically perturbed the natural nitrogen cycle, primarily via emissions of reduced nitrogen from the production and use of fertilizer for agriculture and of oxidized nitrogen from the combustion of fossil fuels. Nitrogen oxides play a central role in driving tropospheric chemistry and are a key ingredient of fine particulate matter, acid rain, and ozone. However, despite this long-understood importance of reactive oxidized nitrogen (NOy) species, even modern chemical transport models have struggled to accurately represent their chemistry.&#13;
&#13;
This thesis spans three projects that seek to characterize and explain possible sources of this uncertainty. The first project presents a comprehensive budget of reactive oxidized nitrogen in the troposphere using a state-of-the-science chemical transport model, and observational constraints for this budget from remote troposphere flight campaign data. We also provide modeled estimates for the chemical fluxes between key NOy species, finding that species beyond those that have been the foci of previous work play a crucial role in driving overall chemical cycling. In the second project we explore the sensitivity of this NOy budget to uncertain multiphase chemistry, including the photolysis of nitrate aerosol, the reactive uptake of nitrogen dioxide on aerosol surfaces, and the uptake of nitric acid on dust. We find that these processes may have substantial regional or temporal importance, but they have limited effects on the global NOy budget and are insufficient to explain inter-model discrepancies. Finally, we investigate the utility of long-term wet and dry deposition measurements made in the continental United States as a constraint on regional anthropogenic emissions trends of acid rain precursors (nitrogen and sulfur oxides). We find that dry deposition fluxes follow anthropogenic emissions trends, and wet deposition fluxes are likely more representative of total regional emissions (natural and anthropogenic). Taken together, these studies provide novel, holistic constraints on reactive oxidized nitrogen and identify key chemical processes that govern the fate of NOy in the troposphere. As anthropogenic emissions continue to decline and the effects of climate change intensify, these insights and such a framework will be useful in accurately predicting future atmospheric chemistry and composition.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159889</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric representation learning for chemical property prediction, structure elucidation, and molecular design</title>
<link>https://hdl.handle.net/1721.1/159888</link>
<description>Geometric representation learning for chemical property prediction, structure elucidation, and molecular design
Adams, Keir Alexander Joseph
Molecular representation learning has revolutionized computer-aided chemistry by enabling the automatic extraction of arbitrarily complex patterns from datasets of (potentially labeled) molecular structures via deep neural networks. In predictive chemistry, deep learning is increasingly being used to replace expensive physics-based simulations and even experimental measurements of chemical properties. In generative chemistry, deep generative models are powering molecular design and optimization campaigns across chemical industries. Notably, this paradigm shift has been driven by the development of sophisticated representation learning algorithms that encode and decode molecular structures with increasing geometric detail – from minimal SMILES strings to elaborate atomistic structures. Yet, many aspects of molecular structure remain neglected by leading geometric representation learning models. Accordingly, this thesis advances the geometric representation learning of molecular structure to create new opportunities in chemical property prediction, structure elucidation, and molecular design. This thesis begins by highlighting surprising failure modes of graph neural networks when predicting properties dependent on chirality and conformational isomerism. A new stereochemistry-tailored model is then developed to imbue graph networks with tetrahedral chiral expressivity while evading pitfalls plaguing preceding 2D and 3D graph networks. This thesis then examines how the geometric quality of structures encoded by 3D networks impacts their accuracy in property prediction tasks requiring the model to reason about conformational flexibility. Neglecting structural characteristics that are challenging to model is also common in computational chemistry. In nuclear magnetic resonance (NMR) prediction, for example, quantum chemical calculations typically estimate magnetic shieldings from stationary gas-phase geometries – ignoring vibrations and explicit solvent. 
To advance chemical structure elucidation, this thesis next develops neural surrogates for magnetic shielding calculations that, when integrated with molecular dynamics simulations, provide access to unprecedented accuracy in solvent-sensitive NMR spectra prediction. Finally, this thesis advances de novo molecular design by explicitly representing 3D shapes, electrostatics, and non-covalent interactions in deep generative models for small molecules. A shape-conditioned variational autoencoder is first developed to design chemically diverse molecules that can adopt desired conformational shapes, like ligand binding poses. This strategy is then generalized into a powerful interaction-aware diffusion modeling framework to comprehensively enable bioisosteric replacement in ligand-based drug design.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159888</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Even Parity Perturbations of the Janis-Newman-Winicour Singularity</title>
<link>https://hdl.handle.net/1721.1/159887</link>
<description>Even Parity Perturbations of the Janis-Newman-Winicour Singularity
Black, Brennen J.
In this paper we build upon previous works on odd-parity perturbations to the Janis-Newman-Winicour singularity by extending the analysis to even-parity perturbations. Perturbations to the metric can be decomposed using tensor spherical harmonics and Fourier decomposition, and are further reduced by gauge transformations. By calculating the Einstein field equations and the divergence of the stress-energy tensor, one obtains 8 independent radial equations for the first-order metric perturbations and scalar field perturbation. Through a suitable functional transformation, one can determine a coupled wave equation between a perturbing function and the scalar field, which is most naturally solved using numerical integration techniques. Following this analysis, we briefly discuss the notion of boundary conditions for a globally naked singularity which are essential to proposing a well-defined perturbation problem.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159887</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical Foundations of Flow-based Methods for Sampling and Generative Modeling</title>
<link>https://hdl.handle.net/1721.1/159886</link>
<description>Theoretical Foundations of Flow-based Methods for Sampling and Generative Modeling
Ren, Zhi (Robert)
Sampling from an arbitrary probability distribution is a central problem in computational statistics and machine learning. Transportation of measure offers a useful approach to this problem: the idea is to construct a measurable map that pushes forward a relatively simple source distribution to the target probability distribution. One can then simulate from the target distribution by drawing samples from the source distribution and evaluating the transport map. This construction is applicable to both generative modeling and variational inference; when the map is invertible, one can also estimate the density of the target measure by evaluating the density of the pushforward of the source distribution under the inverse transport map. Over the past decade, various parameterizations of such transports have been proposed. Generally speaking, they fall into two categories: the static approach, where the displacement from x to T(x) is represented directly, and the dynamic approach, which employs the evolution of measures by some differential equation over a fictitious time. While many of these models have achieved enormous success in practical applications, their theoretical underpinnings remain largely unexplored. In this thesis, we provide a theoretical foundation for flow-based methods for sampling and generative modeling, and a unified view of both continuous and discrete-time approaches. In the first part of the thesis, we address the approximation theory of flow-based methods. In particular, we show how the regularity of the underlying ODE velocity field relates to the regularity of densities and prove related neural network approximation bounds. In addition, we show how the introduction of a time-reparameterized schedule can dramatically improve the regularity of the velocity, helping resolve potential singularities. In the second part of the thesis, we focus on the interplay between flow-based models and nonparametric statistics.
In particular, we consider pullback density estimators under these flow-based models obtained from likelihood-based objectives. The estimators we consider arise from both discrete- and continuous-time parameterizations of the transport, and the underlying function classes we consider include Hölder balls and neural networks. In all these cases, we show that they achieve near-minimax-optimal rates for learning s-smooth densities.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159886</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dual Pairs and Disconnected Reductive Groups</title>
<link>https://hdl.handle.net/1721.1/159885</link>
<description>Dual Pairs and Disconnected Reductive Groups
Gaetz, Marisa
In R. Howe’s seminal paper, “Remarks on classical invariant theory,” he introduces the notion of a Lie algebra dual pair (a pair (g₁, g₂) of reductive Lie subalgebras of a Lie algebra g such that g₁ and g₂ equal each other’s centralizers in g) and the notion of a Lie group dual pair (a pair (G₁, G₂) of reductive subgroups of a reductive Lie group G such that G₁ and G₂ are each other’s centralizers in G). Both notions have since been widely used and studied. This thesis extends what is known about the classifications of complex reductive Lie group and Lie algebra dual pairs, and establishes a step towards a more general framework for understanding complex reductive Lie group dual pairs. In the first part of this thesis, we classify the reductive dual pairs in the complex classical Lie groups: GL(n, C), SL(n, C), O(n, C), SO(n, C), and Sp(2n, C). We also establish some general relationships between Lie group dual pairs and dual pairs in corresponding Lie algebras and quotient groups. These relationships lead to complete classifications of the reductive dual pairs in the complex classical Lie algebras (gl(n, C), sl(n, C), so(n, C), and sp(2n, C)) and preliminary progress towards classifying dual pairs in the projective classical groups (PGL(n, C), PSp(2n, C), PO(n, C), and PSO(n, C)). In the second part of this thesis, we complete an explicit classification of the semisimple Lie algebra dual pairs in the complex exceptional Lie algebras, initially outlined by H. Rubenthaler in a 1994 paper. This explicit classification makes Rubenthaler’s 1994 result more complete, usable, and understandable. A major obstacle to understanding reductive Lie group dual pairs is their potential disconnectedness. Inspired in part by this obstacle, in the third part of this thesis we describe the possible disconnected complex reductive algebraic groups E with component group Γ = E/E₀.
We show that there is a natural bijection between such groups E and algebraic extensions of Γ by Z(E₀). Finally, in the last part of this thesis we classify the reductive dual pairs in PGL(n, C). While the connected dual pairs in PGL(n, C) can be easily understood using tools from the first part of this thesis, the classification of the disconnected dual pairs in PGL(n, C) is much more difficult and requires tools from the third part of this thesis. This serves as the first complete classification of dual pairs in a non-classical group and as a step towards understanding how disconnectedness factors into the classification of dual pairs more generally.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159885</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Average Size of 2-Selmer Groups of Elliptic Curves in Characteristic 2</title>
<link>https://hdl.handle.net/1721.1/159884</link>
<description>The Average Size of 2-Selmer Groups of Elliptic Curves in Characteristic 2
Achenjang, Niven
Let K be the function field of a smooth curve B over a finite field k of arbitrary characteristic. We prove that the average size of the 2-Selmer groups of elliptic curves E/K is at most 1 + 2ζʙ(2)ζʙ(10), where ζʙ is the zeta function of B. In particular, in the limit as q = #k → ∞ (with the genus g(B) fixed), we see that the average size of 2-Selmer is bounded above by 3, even in “bad” characteristics. This completes the proof that the average rank of elliptic curves, over any fixed global field, is finite. Handling the case of characteristic 2 requires us to develop a new theory of integral models of 2-Selmer elements, dubbed “hyper-Weierstrass curves.”
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159884</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cross-shore Transformation of Breaking Random Waves in the Surfzone</title>
<link>https://hdl.handle.net/1721.1/159883</link>
<description>Cross-shore Transformation of Breaking Random Waves in the Surfzone
Chen, Jinshi
The transformation of breaking waves in the surfzone, including the evolution of the roller, the foamy air-water mixture on the surface of a breaking wave, and the turbulence, determines the wave-driven onshore-directed mass transport, the vertical structure of the compensating return flow (undertow), and the increase in the mean water level (setup). A two-phase Reynolds-Averaged Navier-Stokes (RANS) model and field and laboratory observations are used to study the cross-shore transformation of the roller, turbulence, and undertow resulting from irregular breaking waves. Modeled wave heights, wave spectra, setup, and undertow agree well with field and laboratory observations on barred and unbarred bathymetry. The roller forcing contributes 50% - 60% to the setup. The horizontal advection and turbulence each contribute ∼ 20% to the setup, whereas the contribution of bottom stress is largest (up to 20%) for shallow sandbar crest depths. The majority of the energy transferred to the roller is dissipated internally, while 15% - 25% of the energy in breaking waves is first transferred to the roller and then diffused back to the water column. Internal dissipation of roller energy increases with increasing depth of the sandbar crest, possibly indicating a change from plunging to spilling breakers. The momentum flux balance in the mid- and lower water column is between the wave, vertical turbulence transfers, vertical inertia, and setup, whereas near the surface the roller and pressure slope are important. Turbulence transports momentum downwards, while vertical inertia transfers momentum upwards. Turbulence production dominates the near-surface turbulence-energy-flux balance, and its penetration depth in the trough onshore of the sandbar is correlated with the local wave height. The roller thickness is related to the local wave height. Surfzone turbulence is more anisotropic than plane-wake turbulence, and is dominated by cross-shore normal stresses.
Cross-shore vertical two-dimensional anisotropy is dependent on the cross-shore position in the surfzone, vertical shear of the cross-shore current, wave directional spread, frequency, and proximity to the seafloor. The three-dimensional turbulence structure is related to the total vertical current shear, and to the directions of both mean currents and waves. Horizontal turbulence length scales are larger than the vertical length scales, consistent with prior studies.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159883</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing Artificial Intelligence for Efficient and Synthesizable In-silico Molecular Design</title>
<link>https://hdl.handle.net/1721.1/159882</link>
<description>Advancing Artificial Intelligence for Efficient and Synthesizable In-silico Molecular Design
Gao, Wenhao
Small organic molecules possess an astronomical number of structural possibilities and a wide range of functionalities, holding immense potential to provide material-level solutions to critical societal challenges such as health and the environment. However, the discovery of molecules with functionalities tailored to specific applications remains a challenging, time-consuming, and resource-intensive process, often relying on trial-and-error experimentation. Recent advances in computational techniques—particularly in artificial intelligence—offer promising solutions to this inefficiency. These developments are paving the way toward a more systematic and efficient approach to molecular discovery, enabling the design of novel functional molecules tailored to specific needs and accelerating the development of solutions to urgent issues in health, sustainability, and energy. This thesis presents algorithmic advances in artificial intelligence, particularly deep learning, for de novo molecular discovery, framed as a black-box optimization problem with a focus on small organic molecules. The contributions span three core aspects: The first section focuses on improving the sample efficiency of molecular optimization. A central capability of any molecular design algorithm is to determine which direction to explore next within chemical space in order to identify molecules with more optimal properties, given a limited set of known examples. Due to the inherent trade-off between computational efficiency and predictive accuracy in modeling methods, it is crucial to evaluate as few candidate molecules as possible to identify the optimal structure. This section introduces the problem formulation and benchmarking efforts for sample-efficient molecular optimization, followed by several approaches aimed at enhancing efficiency. The second section addresses the challenge of ensuring synthetic accessibility during molecular design.
For small organic molecules with non-trivial syntheses, any design that cannot be realized in the lab has limited practical value. This presents a unique constraint in small molecule design that often renders direct adoption of algorithms developed for language or vision tasks ineffective. After framing the problem, this section introduces a generative modeling framework that integrates synthesis and design, ensuring that the search is constrained to synthesizable chemical space. It further introduces the concept of “generative molecular projection” and demonstrates its application in balancing sample efficiency and synthetic feasibility. The third section targets the improvement of oracle accuracy for molecular discovery. Achieving both accurate and efficient prediction of molecular properties has long been a central goal in computational chemistry. While deep learning has shown promise in breaking the traditional trade-off between accuracy and efficiency by leveraging large-scale historical data, its full potential—especially for directly learning experimentally measured bioactivities under data-scarce conditions—has yet to be realized. This section presents a benchmarking effort on applying deep learning to therapeutic-related property prediction, and introduces substrate scope contrastive learning as a strategy to learn reactivity-related patterns from published reaction datasets. Together, these three components present a systematic, data-driven methodology for small organic molecule discovery that minimizes the need for extensive domain expertise. The algorithms developed in this thesis are designed to support autonomous workflows, potentially enabling closed-loop molecular discovery that maximizes efficiency and reduces both cost and reliance on human intuition. While the demonstrations in this thesis primarily target pharmaceutical applications, the methods are task-agnostic and can be readily extended to broader material discovery efforts.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159882</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Competitive Sorption in Microporous Polymer Membranes to Enhance Gas Separation Performance</title>
<link>https://hdl.handle.net/1721.1/159881</link>
<description>Leveraging Competitive Sorption in Microporous Polymer Membranes to Enhance Gas Separation Performance
Dean, Pablo A.
Chemical separations account for roughly half of the United States’ industrial energy consumption, 49% of which is attributed to distillation alone. Membrane-based systems, on the other hand, offer a more energy-efficient alternative to conventional separation processes because they do not require thermally intensive phase changes to operate. Specifically, polymer membranes with a more permanent porosity (termed “microporous”) have gained attention due to their impressive combination of permeability (throughput) and permselectivity (separation efficiency) relative to the empirically defined “upper bound” for membrane materials.&#13;
Traditionally, the permeability of a membrane for a gas is defined by the product of the gas’s diffusivity and sorption coefficient in the material.  By extension, a membrane’s permselectivity can be broken down into the product of its diffusion selectivity and sorption selectivity. Microporous polymer membranes exhibit impressive diffusion selectivity due to their small free volume elements (&lt; 2 nm) and rigid backbones. However, separating gases based primarily on size can become exceedingly difficult given that some gases differ in kinetic diameter by less than an angstrom. Instead, recent advancements in the design of microporous polymers have indicated that a phenomenon known as competitive sorption can be used to enhance separation performance by leveraging gas–polymer interactions instead of differences in gas diffusivity. This thesis investigates how the increase of sorption selectivity through competition between gases can be exploited to enhance the permselectivity of microporous polymer membranes. Specific focus is placed on the archetypal polymer of intrinsic microporosity (PIM-1) and its amine-functional analog (PIM-NH₂) to study how enhanced acid-gas (CO₂ and H₂S) sorption brought on by amine functionality positively impacts separation performance. To confirm the generalizability of these trends, competition effects in the microporous poly(arylene ether) (PAE) backbone were studied as well. To investigate more industrially viable membranes while retaining strong gas–polymer interactions afforded by the amine group, this PAE backbone was also used to develop 8 solution-processable tertiary-amine-functional analogs. Lastly, in an effort to study the effects of water vapor on CO₂-focused separations in amine-functional microporous polymer membranes, a humidified gas permeation apparatus was developed and used to measure dry and humidified CO₂ transport in PIM-1, PIM-NH₂, and a novel secondary-amine-functional analog, PIM-NHiPr. 
Taken together, this thesis focuses on the fundamentals and practical implications of leveraging competitive sorption to enhance performance in application-relevant and multi-component gas mixtures. More specifically, this work provides valuable insight regarding amine functionalization and its strong effects on sorption energetics and humidified gas transport that will help to inform future design of polymer membranes for gas separations.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159881</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accurate Protein Function Prediction with Graph Transformer-Based Function Localization</title>
<link>https://hdl.handle.net/1721.1/159880</link>
<description>Accurate Protein Function Prediction with Graph Transformer-Based Function Localization
Mitra, Shania
Protein function prediction is a fundamental challenge in biology, crucial for understanding biological processes and disease mechanisms, and for accelerating drug discovery. While computational methods leveraging sequence or structural information have advanced, accurately translating protein structure to function and pinpointing the specific residues responsible remain significant hurdles. Many existing deep learning approaches fall short, often relying on post-hoc analyses that lack specificity or fail to directly integrate functional site identification into the prediction process. In this study, we introduce the Protein Region Proposal Network (ProteinRPN), a novel graph-based deep learning framework designed to address these limitations. ProteinRPN is the first model to integrate the proactive identification of functional regions within the Gene Ontology term prediction pipeline. The core of the model is a Region Proposal Network module that processes protein structure graphs (residues as nodes, contacts as edges) to identify potential functional regions, termed anchors. These anchors are subsequently refined using a multi-stage process involving a novel differentiable node drop pooling layer that incorporates domain knowledge. A functional attention layer further enhances the representations of predicted functional nodes, and a Graph Multiset Transformer aggregates this localized information into a comprehensive graph-level embedding for final prediction. The model is optimized using a combination of a cross-entropy classification loss and supervised and self-supervised contrastive learning losses (SupCon and InfoNCE) for robust representation learning.
Evaluated on standard benchmarks derived from the DeepFRI/HEAL datasets, ProteinRPN demonstrates state-of-the-art performance, consistently outperforming existing sequence-based and structure-based methods across all three Gene Ontology domains (Molecular Function, Biological Process, Cellular Component) based on standard CAFA metrics (Fmax, AUPR, Smin). Notably, ProteinRPN achieves significant improvements over strong baselines like HEAL, with AUPR (Area under Precision Recall curve) gains of approximately 15.4% (BP), 8.5% (CC), and 1.3% (MF). Furthermore, ablation studies validate the contribution of each key component, particularly the region proposal mechanism. Qualitative analysis confirms the model’s ability to accurately localize known functional residues within protein structures, offering enhanced interpretability. By directly modeling and identifying functionally relevant structural regions, ProteinRPN presents a robust, interpretable, and high-performing approach to structure-based protein function prediction. This work contributes a novel framework that bridges the gap between structural information and functional annotation, offering potential for deeper biological insights and advancing computational tools for understanding the proteome.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159880</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Catalytic implications of confined solvent ensembles within Lewis acid zeolites</title>
<link>https://hdl.handle.net/1721.1/159879</link>
<description>Catalytic implications of confined solvent ensembles within Lewis acid zeolites
Johnson, Blake A.
Lewis acidic zeolites are microporous crystalline materials that offer promise as catalysts for the activation and conversion of biomass-derived precursors in the liquid phase due to their unique water tolerance and synthetic versatility. The active site environment in zeolite catalysts is multifaceted in nature and is composed of a primary catalytic binding site, the secondary pore structure that confines such binding sites, and occluded solvent and reactant molecules that interact with adsorbed species. Moreover, Lewis acidic heteroatoms can adopt structurally diverse coordination environments that selectively catalyze different classes of chemical transformations and can be difficult to control synthetically or characterize spectroscopically. In this thesis, precise mechanistic interpretation of liquid-phase zeolite catalysis was realized through the development of synthetic, spectroscopic, and kinetic methods that decouple complex active site structures and probe the interactions that occur between confined active sites, solvent and reactant molecules, and adsorbed intermediates and transition states.&#13;
&#13;
First, we show how hydrophobic Beta zeolites containing framework Sn atoms catalyze transfer hydrogenation reactions of cyclohexanone in a 2-butanol solvent 10x faster than their hydrophilic analogues. This rate enhancement stems from the ability of hydrophobic Sn-Beta to inhibit the formation of extended liquid-like 2-butanol oligomers and promote dimeric H-bonded 2-butanol networks. The ordered H-bonding solvent network present in hydrophobic Sn-Beta stabilizes the transfer hydrogenation transition state to a greater extent than the liquid-like 2-butanol solvent present in hydrophilic Sn-Beta, giving rise to higher turnover rates on hydrophobic Sn-Beta. Additionally, reactant adsorption within hydrophobic Sn-Beta is entropically-driven by the breakup of intraporous solvent-solvent interactions, resulting in positive enthalpies of adsorption that are partially compensated by an increase in the solvent reorganization entropy. These results emphasize the ability of the zeolite pore to regulate the structure of confined non-aqueous H-bonding solvent networks, which offers an additional dimension to modulate adsorption and reactivity.&#13;
&#13;
Next, we extend our studies to understand how different intraporous alcohol networks reorganize in response to adsorbate sterics and the presence of non-H-bonding co-solvents. Here, we find that first-order rates for methyl-cyclohexanone transfer hydrogenation are ~2-5x higher than for tert-butyl-cyclohexanone, but converge in the zero-order regime across all temperatures (333-393 K) in a bulk 2-butanol solvent. These results show that, while intrinsic bond-activation steps at the active site are largely independent of molecular functionalization of the ketone reactant, adsorption within hydrophobic Sn-Beta is still driven by the breakup of intraporous solvent-solvent interactions. Furthermore, comparisons between bulk toluene or acetonitrile solvents, with 1 M 2-butanol as a reactant, show the significance of intraporous solvent for stabilizing kinetically-relevant species and the complex interdependencies between solvent and catalyst hydrophilicity. Apparent zero-order activation enthalpies and entropies increase with decreasing solvent polarity over hydrophobic zeolites, indicating that the transition state is more tightly bound to the open Sn site when first-shell solvent molecules become more polarizing. Conversely, adsorption and activation entropies and enthalpies measured on hydrophilic zeolites in toluene and acetonitrile solvents are nearly identical to those measured in a bulk 2-butanol solvent, suggesting that the intraporous solvating environment in bulk, non-H-bonding co-solvents is similar to that observed when bulk 2-butanol is the solvent. &#13;
&#13;
Finally, we exploit the ability of carbonyl groups to measure electric field differences arising from the different intraporous solvent structures through the vibrational Stark effect. By measuring infrared absorption spectra of Ti-bound acetone in Beta zeolites of varying framework hydrophobicity across a wide range of non-coordinating solvents, we find unique electric field differences arising from distinct solvation under nanoconfinement. Moreover, in the absence of intraporous solvent, we observe a ~7 cm⁻¹ shift in the Ti-bound carbonyl stretching frequency. These results suggest that local differences in the Lewis acid site environment, which influence observed kinetics across reaction classes, arise from the synthetic protocol used to produce each material. &#13;
&#13;
Taken together, the results of this thesis reveal how different solvent-mediated, non-covalent interactions control liquid-phase reactivity within porous, Lewis acid zeolite catalysts. It is our hope that the kinetic and spectroscopic approaches advanced here will provide a useful roadmap for further experimental investigations into the catalytic implications of confined solvent.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159879</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic Advances for Fair and Efficient Decision-Making in Online Platforms</title>
<link>https://hdl.handle.net/1721.1/159878</link>
<description>Algorithmic Advances for Fair and Efficient Decision-Making in Online Platforms
Chen, Qinyi
Modern online platforms—such as recommendation systems, advertising markets and e-commerce sites—operate in dynamic and complex environments where efficient algorithmic decision-making is essential. These platforms must continuously adapt to rapidly changing user behaviors, market fluctuations, and data uncertainties while optimizing for both learning efficacy and revenue generation. However, focusing solely on performance can lead to biased outcomes and inequitable treatment of users and items, raising concerns about fairness. Balancing efficiency and fairness is therefore crucial for sustainable platform growth. In this thesis, we tackle these challenges by developing novel algorithmic frameworks and methods that integrate fairness considerations with robust learning and optimization techniques. We explore these problems from three distinct perspectives, each contributing to enhancing the decision quality and fairness considerations in online decision-making.&#13;
&#13;
In Chapter 2, we first focus on the topic of efficiency, by addressing the challenge of performing online learning in a highly non-stationary environment. User behaviors and preferences often change over time, making it difficult for traditional algorithms to maintain good performance. This issue is particularly prevalent in real-world applications such as recommendation systems and advertising platforms, where shifts in user dynamics can undermine decision-making efficacy. To tackle this, we propose a novel algorithm for the widely adopted multi-armed bandit framework that enables platforms to adaptively learn in a fast-changing environment characterized by auto-regressive temporal dependencies.&#13;
&#13;
In Chapter 3, we shift our focus to the realm of fairness and explore how fairness considerations can be effectively integrated into the context of assortment planning. As algorithmic recommendations become integral to platform operations, a purely revenue-driven approach can result in highly imbalanced outcomes, leading to certain items receiving minimal exposure and exiting the platform in the long run. To address this, we develop a combinatorial optimization framework that incorporates fairness constraints, ensuring equitable exposure and opportunities for all items on the platform. We design a series of polynomial-time approximation algorithms to solve the fair assortment problem. Through numerical studies on both synthetic data and real-world MovieLens data, we showcase the effectiveness of our algorithms and provide insights into the platform's price of fairness.&#13;
&#13;
In Chapter 4, we bridge the topics of fairness and learning efficiency by examining how to achieve multi-stakeholder fairness in a multi-sided recommendation system. Here, the challenge is multifaceted, including ensuring high platform revenue, maintaining fair outcomes for diverse stakeholders, and enabling robust learning amidst data uncertainty. We propose a novel optimization framework that maximizes platform revenue while enforcing fairness constraints for both items and users, accommodating various fairness notions and outcome metrics. Building on this, we introduce a low-regret online learning and optimization algorithm that dynamically balances learning and fairness—two objectives that are often at odds. Finally, we demonstrate the efficacy of our approach via a real-world case study on Amazon review data and offer actionable guidelines for implementing fair policies in practice.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159878</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of traffic at Governor Square, Boston, with suggestions for its regulation</title>
<link>https://hdl.handle.net/1721.1/159853</link>
<description>A study of traffic at Governor Square, Boston, with suggestions for its regulation
Phisānsukhumwit, Phra.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1923
</description>
<pubDate>Mon, 01 Jan 1923 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159853</guid>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Axial vibration of steam turbine buckets</title>
<link>https://hdl.handle.net/1721.1/159852</link>
<description>Axial vibration of steam turbine buckets
Ewert, Richard H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1938; Includes bibliographical references (leaf 56).
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159852</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The deflection of steam turbine diaphragms</title>
<link>https://hdl.handle.net/1721.1/159851</link>
<description>The deflection of steam turbine diaphragms
Prohl, Melvin Albert.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1938
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159851</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new railway labor plan</title>
<link>https://hdl.handle.net/1721.1/159850</link>
<description>A new railway labor plan
Gilman, Jonathan C.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1963; Includes bibliographical references (leaves 119-121).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159850</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the great meadows area Lexington, Massachusetts</title>
<link>https://hdl.handle.net/1721.1/159849</link>
<description>A study of the great meadows area Lexington, Massachusetts
Banks, Philip Oren.
Thesis: B.S., Massachusetts Institute of Technology, Department of Geology and Geophysics, 1958
</description>
<pubDate>Wed, 01 Jan 1958 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159849</guid>
<dc:date>1958-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transient analysis of marine steam turbine, propeller and ship dynamics.</title>
<link>https://hdl.handle.net/1721.1/159848</link>
<description>Transient analysis of marine steam turbine, propeller and ship dynamics.
Stang Lund, Emil.
Thesis: M.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1965; Bibliography: leaves 79-91.
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159848</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The interaction of shock waves with porous materials</title>
<link>https://hdl.handle.net/1721.1/159847</link>
<description>The interaction of shock waves with porous materials
McMillan, Charles Frederick.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1983; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159847</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An operational analysis of industrial research</title>
<link>https://hdl.handle.net/1721.1/159846</link>
<description>An operational analysis of industrial research
Freeman, Raoul J.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1957; Vita.; Bibliography: leaves 101-106.
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159846</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Erosion test on valve and turbine metals</title>
<link>https://hdl.handle.net/1721.1/159845</link>
<description>Erosion test on valve and turbine metals
Blackett, Sydney W.; Golding, Harold B.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1923
</description>
<pubDate>Mon, 01 Jan 1923 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159845</guid>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural and biochemical characterization of RNA polymerase II transcription</title>
<link>https://hdl.handle.net/1721.1/159835</link>
<description>Structural and biochemical characterization of RNA polymerase II transcription
Su, Bonnie G.
Eukaryotic development requires precise temporal regulation of gene expression orchestrated through a series of complex mechanisms. One such mechanism involves pausing of RNA polymerase II (Pol II) in the promoter-proximal region of genes. Pausing is stabilized by the protein complexes DRB-sensitivity inducing factor (DSIF) and negative elongation factor (NELF). Prior structural and biochemical studies provide specific mechanisms for stabilization of paused Pol II by NELF. However, cellular data suggest that NELF can accompany actively elongating Pol II into the gene body, indicating that NELF may be able to associate with Pol II without enforcing Pol II pausing. This thesis presents cryo-electron microscopy structures of Pol II-DSIF-NELF complexes with NELF in two distinct conformations on the surface of Pol II: the paused state and the poised state. The poised state does not support a tilted RNA-DNA hybrid, a key characteristic of pausing, indicating that NELF in the poised state is compatible with elongating Pol II. Furthermore, Pol II bound to NELF in the poised conformation can simultaneously accommodate TFIIS binding, allowing reactivation of Pol II at sites of pausing or backtracking. Finally, a region of the flexible NELF-A tentacle engages with the RPB2 protrusion, an interface necessary for pausing. These results define how NELF can support both Pol II pausing and elongation and provide the molecular basis for how transcription can be reactivated when NELF is bound to Pol II.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159835</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synaptic Multimodal Imaging and Molecular Network Inference</title>
<link>https://hdl.handle.net/1721.1/159834</link>
<description>Synaptic Multimodal Imaging and Molecular Network Inference
Falkovich, Reuven
All cognitive function is reliant on synaptic function – the molecular computation that integrates activity history, chemical environment, and the genetic state of its pre- and post-synaptic neurons to modulate neuron-neuron communication through synaptic plasticity. This computation is performed by the highly compartmentalized, tightly regulated, and complex network of interactions between synaptic activity and hundreds of proteins and the mechanisms that regulate them. Isolating individual processes loses the context in which they occur, while bulk analyses average over highly heterogeneous populations and lose correlation information. A top-down study of the entire system in action requires measurement of multiple synaptic parameters – composition and activity – simultaneously in individual synapses. Building on a previously developed probe exchange multiprotein imaging technique, this thesis presents MINI-ME, a versatile, modular platform for integrating multiple information modalities at single synapses. We developed an approach for tandem live-fixed imaging to combine synaptic calcium dynamics or glutamate spiking information with multiprotein measurements. We also integrated rolling circle amplification-based in situ methods, such as a reporter of gene-specific translation. Based on simulated and experimental data, we evaluated the application of Bayesian network inference to high-dimensional multimodal synapse distributions to extract biological insight. Finally, we applied this new approach to an in-depth investigation of synaptic molecular perturbations associated with autism and schizophrenia genetics and psychiatric drug activity.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159834</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sideroflexins enable mitochondrial transport of polar neutral amino acids</title>
<link>https://hdl.handle.net/1721.1/159833</link>
<description>Sideroflexins enable mitochondrial transport of polar neutral amino acids
Block, Samuel
Mitochondria contribute to compartmentalized metabolism in eukaryotic cells, facilitating diverse enzymatic reactions that support cell function. However, this compartmentalization of metabolism necessitates regulated transport of metabolites across the inner mitochondrial membrane. While many proteins enabling mitochondrial membrane transport of metabolites are known, how some metabolites are transported is not known, and several mitochondrial amino acid transporters are largely uncharacterized. The goal of this dissertation is to better understand which proteins in the mitochondrial inner membrane regulate amino acid transport, particularly for substrates that lack known transporters, and how these proteins regulate associated metabolic pathways. Using CRISPR-Cas9-mediated candidate transporter knockouts coupled with assessment of metabolite transport via a mitochondrial swelling assay, we identified SFXN1 as a gene that mediates mitochondrial membrane permeability to polar neutral amino acids, including proline, glycine, taurine, hypotaurine, beta-alanine, and gamma-aminobutyric acid (GABA). SFXN2 and SFXN3 partially complemented loss of SFXN1 to enable glycine transport, while SFXN2 and SFXN5 partially complemented loss of SFXN1 to enable GABA transport. Altogether, this work suggests that sideroflexins regulate the delivery of polar neutral amino acids across the inner mitochondrial membrane, many of which lack known carriers, and contributes to a better understanding of how mitochondrial amino acid transport regulates cellular metabolism.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159833</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strengthening Value Chains for Developing and Deploying Batteries in the Global South</title>
<link>https://hdl.handle.net/1721.1/159832</link>
<description>Strengthening Value Chains for Developing and Deploying Batteries in the Global South
Munjal, Mrigi
This thesis presents an integrated assessment of the elements required to strengthen the battery industry in emerging markets. It articulates a synergistic approach to fostering resilient battery value chains that are critical for the sustainable energy transition in the Global South. The first part argues that building a more diversified and secure raw material base is essential for robust battery value chains in developing economies. It establishes the groundwork by proposing a potential pathway to diversify the global lithium supply chain by examining the potential of lithium mining in Arkansas through stakeholder analysis and policy recommendations. The second part underscores the importance of technology adaptation and process innovation in developing cost-effective battery chemistries suitable for the distinct conditions of the Global South. This part of the thesis addresses the technological challenges in scaling up battery production, focusing on sodium-ion batteries (SIBs) as a promising alternative to lithium-ion systems. Through an innovative application of natural language processing, this analysis distills the vast landscape of SIB research to identify scalable solutions for electrode design and manufacturing. The final part of the thesis converges on the deployment aspect of batteries, scrutinizing the role of Battery Energy Storage Systems (BESS) in three distinct emerging markets: India, South Africa, and Malawi. It offers a granular perspective on the application of BESS within varied energy landscapes, advocating for the customization of storage solutions to local market realities. This illuminates the transformative potential of BESS for enhancing grid stability and enabling renewable energy integration, thereby empowering the Global South to leapfrog to a resilient and green energy paradigm.
This thesis coalesces into a comprehensive framework that underscores the multifaceted aspects of value chain enhancement—from mineral sourcing and battery chemistry innovation to end-use applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159832</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Iterative Engineering, System Confidence, and In-space Servicing Assembly &amp; Manufacturing</title>
<link>https://hdl.handle.net/1721.1/159831</link>
<description>Iterative Engineering, System Confidence, and In-space Servicing Assembly &amp; Manufacturing
Luu, Michael A.
System Confidence is proposed as a method for quantifying the progress and performance of engineering systems. Confidence measures the degree of certainty that a system, design, or process will perform as intended. This metric aggregates existing tools in model-based systems engineering with requirement verification and test events, mitigating the obstacles to adopting iterative engineering for hardware systems that stem from the loosely defined requirements set at the beginning of such programs.&#13;
 &#13;
Iterative engineering has enabled the rapid progress of recent commercial space systems, reduced launch costs through reusable rockets, and deployed large-scale constellations to orbit. This engineering method has been made possible through recent advancements in digital engineering, additive manufacturing, scalable systems, modeling, and simulation software. Iterative engineering challenges the traditional V-model of systems engineering as the default and only choice in designing complex space systems. Rapidly testing novel payloads and satellite capabilities early in the design process can reduce development schedules and deliver capabilities earlier than traditional systems.&#13;
 &#13;
The advent of In-space Servicing, Assembly, and Manufacturing (ISAM) has ignited new prospects for system architectures, designs, and applications. Until now, iterative engineering has only been leveraged to design robust space systems that are resilient and self-reliant in post-launch deployment and operations. ISAM extends options, flexibility, and engineering decisions beyond the launch phase of space systems.&#13;
 &#13;
First, System Confidence is defined and quantified, measuring a system's capabilities throughout its development cycle. Second, System Confidence is used to evaluate three existing and historic space programs. Third, ISAM is explored as an extension of iterative engineering for space systems. Lastly, the relationship between iterative engineering outcomes and the utility of ISAM for a space systems architecture is analyzed.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159831</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolution and engineering of protein-protein interactions</title>
<link>https://hdl.handle.net/1721.1/159830</link>
<description>Evolution and engineering of protein-protein interactions
Ghose, Ashavari (Dia)
Protein-protein interactions are crucial elements in most biological processes. The gain and loss of interactions during evolution have important phenotypic consequences that are subject to selection. Therefore, in a crowded cellular environment, proteins must evolve mechanisms to maintain the correct interactions and avoid inappropriate ones. In this work, I leveraged high-throughput methods for the functional characterization of thousands of protein variants to characterize the sequence spaces associated with paralogous families of interacting proteins. Protein families are formed by gene duplication and divergence, a common source of evolutionary novelty. Family members maintain conserved structural and sequence elements, and yet must often form distinct protein-protein interactions. To probe the extent to which the requirement for interaction specificity constrains evolution, I focused on the two-component system family of bacterial signaling proteins. I tested protein variants with all possible single substitutions in the interacting domain of a model protein for their ability to interact with a cognate partner protein and with closely related non-cognate partners. I found that a large fraction of substitutions introduce non-specific interactions, suggesting that paralogs only evolve ‘marginal specificity’ that can easily be disrupted. Bioinformatic evidence indicates that the resulting crowded local sequence space has restricted the evolvability of two-component systems. I also characterized the effects of environmental context constraints, specifically temperature, on the sequence space relevant to two-component system function. This revealed generally conserved sequence-function landscapes across temperatures, with small numbers of variants showing either temperature sensitivity or resistance. Biochemical characterization of these variants challenges existing paradigms relating to the effects of temperature on evolution.
Finally, I utilized insights into the evolution of protein-protein interaction specificity to inform the design of protein binders to toxin-antitoxin systems. These binders are selective in their interactions with toxin homologs, and inhibit toxin-antitoxin-mediated bacterial anti-phage defense activity, suggesting their potential use in clinical phage therapy applications. Taken together, these results shed light on the role of protein-protein interactions and their specificity in shaping evolution and suggest the utility of leveraging interaction specificity for engineering purposes.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159830</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Market value and financial structure in the railroad industry</title>
<link>https://hdl.handle.net/1721.1/159441</link>
<description>Market value and financial structure in the railroad industry
Nielsen, Scott.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1961; Includes bibliographical references (leaf 117).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159441</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>American medical views on England's National Health Service expressed in three American medical journals, 1948-1967</title>
<link>https://hdl.handle.net/1721.1/159440</link>
<description>American medical views on England's National Health Service expressed in three American medical journals, 1948-1967
Schwarz, John Stanley.
Thesis: B.S., Massachusetts Institute of Technology, Department of Humanities, 1967; Includes bibliographical references (leaves 70-72).
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159440</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A reinvestigation of the preparation of 1-carbethoxy-3-hydroxy-2-piperidylacetic acid</title>
<link>https://hdl.handle.net/1721.1/159439</link>
<description>A reinvestigation of the preparation of 1-carbethoxy-3-hydroxy-2-piperidylacetic acid
Wolpers, Jürgen Paul.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1957; Includes bibliographical references (leaves 20-21).
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159439</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some physical and rheological properties of human blood.</title>
<link>https://hdl.handle.net/1721.1/159438</link>
<description>Some physical and rheological properties of human blood.
Meiselman, Herbert Joel.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1966; Bibliography: p. 312-325.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159438</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A computer program for assessing the economics of hie[r]archical systems : an application in engineering project evaluation</title>
<link>https://hdl.handle.net/1721.1/159437</link>
<description>A computer program for assessing the economics of hie[r]archical systems : an application in engineering project evaluation
Hagen, Arnulf.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1990; Title as it appears in the M.I.T. Graduate List, Feb. 1990: A computer model for assessing the economics of hierarchical systems.; Includes bibliographical references (leaves [73]-85).
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159437</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A model for groupoids of homeomorphisms</title>
<link>https://hdl.handle.net/1721.1/159436</link>
<description>A model for groupoids of homeomorphisms
Greenberg, Peter A.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1982; Bibliography: leaf 84.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159436</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ion optics of an electrostatic lens</title>
<link>https://hdl.handle.net/1721.1/159435</link>
<description>Ion optics of an electrostatic lens
Hangst, Jeffrey Scott.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1980; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159435</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solvent extraction in packed columns</title>
<link>https://hdl.handle.net/1721.1/159434</link>
<description>Solvent extraction in packed columns
Evans, James E.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1938; Vita.; Includes bibliographical references (leaf 142).
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159434</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tissue-encoded Design Principles of Host Defense</title>
<link>https://hdl.handle.net/1721.1/159375</link>
<description>Tissue-encoded Design Principles of Host Defense
Misra, Aditya
Inflammatory diseases have been rising in incidence over the past few decades and are a result of inappropriate activation of tissue-resident immunity. This inappropriate activation can be derived from any number of cell types, including immunoregulatory functions of non-immune cells such as epithelial cells. In this thesis, we investigated tissue metabolism and inflammation across different temporal and spatial scales using a unique combination of metabolomics, mathematical modeling, metabolic assays, and chemical characterization.&#13;
Our aim was to identify pathways that protect against inflammation-induced tissue damage and improve clinical outcomes. Thus, we studied A) chronic local tissue inflammation using a colitis model (Chapter 2) and B) acute systemic inflammation using a sepsis model (Chapter 3). In each disease, we studied changes in tissue architecture and the resulting cross-talk among cell types in the microenvironment. In colitis, we found that upon release during tissue damage, IL-18 launches a unique metabolic program in macrophages that 1) exhibits bistable and hysteretic behavior, 2) provides protective memory against inflammatory challenge, and 3) relies on positive feedback with intestinal epithelial cells to maintain the program. In our mouse model of bacterial sepsis, we performed liver tissue metabolomics and found that branched-chain ketoacids (BCKAs), metabolic products of branched-chain amino acids, are released during systemic inflammation and serve as endogenous antioxidants that neutralize extracellular peroxides. They thus reduce tissue damage and more than double survival rates. Through this thesis, we show tissue-intrinsic mechanisms that 1) organize positive feedback loops among cells to establish protective memory against inflammation and 2) secrete endogenous antioxidants to limit pathogenic extracellular oxidants induced by inflammation without quenching bactericidal intracellular oxidants.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159375</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Muscle Sensing Modalities for Advanced Bionics</title>
<link>https://hdl.handle.net/1721.1/159374</link>
<description>Enhancing Muscle Sensing Modalities for Advanced Bionics
Yeon, Seong Ho
Muscle sensing technologies have significantly advanced our understanding of biomechanics and enhanced the efficacy of bionic devices. These technologies enable volitional control of prostheses and assistive devices by mapping the electrical and mechanical activities of muscles as control inputs. This dissertation presents novel paradigms and findings to improve the utility and efficacy of muscle sensing modalities for advanced bionic applications.&#13;
&#13;
In the first part, I introduce a comprehensive approach to improve the acquisition and processing of surface electromyography (sEMG) signals for bionic applications. This includes innovations in electrode materials and design to enhance user comfort and signal quality for long-term use within prosthetic sockets. Additionally, I propose a real-time impulse filtering algorithm to effectively suppress artifacts while preserving the underlying sEMG signal during dynamic movements. Furthermore, I demonstrate a synchronous sEMG and ultrasound acquisition method that enables simultaneous assessment of muscle electrical activity and mechanical deformation, providing valuable insights into muscle function and control.&#13;
&#13;
In the second part, I explore how Magnetomicrometry can serve as a new in-vivo and real-time mechanical muscle state tracking modality. Previous work has shown significant potential for Magnetomicrometry in muscle-state tracking via a tightly controlled in situ setup. In this work, I demonstrate real-time tracking of muscle tissue length in freely moving animals performing various motor activities, suggesting that Magnetomicrometry could be extended as a viable in-vivo and real-time muscle sensing modality.&#13;
&#13;
In the final part, I propose a novel theoretical framework leveraging Riemannian geometry and manifold theory to enhance the magnet tracking technology stack for Magnetomicrometry. By representing the magnetic dipole state on a manifold and incorporating its dynamics, I develop a more accurate and robust magnet tracking algorithm that addresses the limitations of existing methods. Through simulations and real-world data evaluations, I demonstrate the superior performance of the proposed manifold-based tracking paradigm, showcasing its potential to improve the resolution and extend the observable depth of Magnetomicrometry.&#13;
&#13;
The advancements presented in this dissertation have significant implications for the development of next-generation bionic devices, enabling more adaptive, versatile, and reliable myo-neural interfaces. Through this work, I hope to open up new possibilities for the design and control of advanced prostheses and assistive technologies built on these myo-neural control interfaces.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159374</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>BIOSENTERO: Bioinspired Soft Enteroscopic Robot for Facilitating Locomotion, Steering, and Intervention in the Deep Small Intestine</title>
<link>https://hdl.handle.net/1721.1/159373</link>
<description>BIOSENTERO: Bioinspired Soft Enteroscopic Robot for Facilitating Locomotion, Steering, and Intervention in the Deep Small Intestine
Jebran, Ahmad Mujtaba
Diagnosing and treating small intestinal disorders such as bleeding, inflammatory bowel disease, and tumors pose significant challenges due to limitations in accessing this anatomical compartment. To address these challenges, we develop BIOSENTERO, a bioinspired soft enteroscopic robot, to facilitate deep small intestine procedures, which addresses challenges associated with locomotion, steering, and intervention faced by existing soft robotic systems. BIOSENTERO features a hollow-cylinder design consisting of a linearly deformable soft pneumatic actuator as the robotic body, two radially expandable soft pneumatic actuators wrapped with Kirigami sleeves as the robotic head and tail units, a central hollow channel for housing accessory endoscopic tools, and a control box and joystick for navigation. The robot's body is a fiber-reinforced actuator with four inflatable chambers, enabling versatile movements, including axial expansion and contraction and bending over 90 degrees for 360-degree planar access. The dynamic Kirigami sleeve design achieves clinically acceptable friction force on intestinal mucosa with radial expansion, while minimizing tissue distention. A reinforced central channel supports the passage of tools to facilitate diagnostic and therapeutic interventions. A control box supports efficient locomotion and steering, achieving autonomous speeds of ~100 mm/min in vitro and ~43 mm/min in ex vivo intestinal tissue, and an assisted speed of ~200 mm/min in pig studies, without overdistention. Through in vivo pig studies, we demonstrated BIOSENTERO's potential for tissue biopsies, localized drug delivery, and real-time visualization in the deep intestinal region, without causing tissue overdistention and damage.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159373</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Wireless Delamination Sensor</title>
<link>https://hdl.handle.net/1721.1/159372</link>
<description>Structural Wireless Delamination Sensor
Ghosh, Aniruddha
Composite materials, particularly laminated fibre reinforced polymer composites, have gained widespread acceptance in various industries due to their superior strength-to-weight ratio and corrosion resistance. The phenomenon of cracking between plies/laminae of such a layered composite is commonly referred to as delamination and occurs due to various causes, such as corrosion and fatigue of the structure. Structural integrity can be enhanced by monitoring delamination, ideally with a sensor that can continuously monitor the delamination extent. The delamination sensor proposed in this thesis (termed the Wireless Interlaminar Nano Sensor, or WINS) is an LC resonant circuit (resonant frequency fₛ = 1 MHz) and, unlike prior sensors, is composed solely of structural materials: a structural epoxy and carbon nanotubes (CNTs). The delamination crack causes a change in capacitance of the sensor, leading to a change in its resonant frequency. The wireless sensor operation was demonstrated using an LC resonant circuit implemented on a printed circuit board, which is termed the sensor emulator (SE). A wireless sensing circuit and reader provided by Analog Devices Inc. were used for the initial measurements using the SE and, later, the proof-of-concept (PoC WINS) devices. The PoC WINS device is a CNT-polymer nanocomposite-based parallel-plate capacitor, adhesively bonded between two composite laminates and connected in parallel to the capacitor of the SE. The PoC WINS device was subjected to loading in the Mode-I configuration to induce delamination crack growth. The quality factor Q of the SE was varied (Q = 18, 3.2, 1.6, 0.8) by adding different external resistors, and a signal was acquired wirelessly for each value of Q as the delamination crack propagated. The wirelessly acquired signal was also sampled (sampling frequency Fₛ = 100 MHz) and analyzed to estimate the resonant frequency of the sensor.
The effect of low sampling frequency was studied by downsampling the acquired signal by a factor of 100. When Q was large (Q = 18), a change of ∼2 kHz in the resonant frequency could be detected, corresponding to a change in capacitance of ∼100 pF. At smaller values of Q (∼1), the challenges encountered in wireless signal acquisition were the too-rapid decay of the sensor signal and a low signal-to-noise ratio (SNR). A wireless sensing circuit was therefore designed and developed to enable signal acquisition at Q ≤ 1. The SE was used in the feedback system of a modified Armstrong oscillator (MAO) to obtain a sinusoidal signal of constant amplitude (∼1 V, SNR ∼100 dB) even at Q = 0.8. The frequency (f_AO) of the signal wirelessly acquired from the MAO is a non-linear function of the capacitance and the quality factor Q of the sensor and was observed to be in the range of 2 MHz. The MAO was tested for its performance using PoC WINS devices. Capturing the output signal for a duration of ∼100 µs was sufficient for accurate estimation of the frequency (standard deviation ∼3 Hz). At Q = 0.8 of the sensor, the MAO was able to detect a change in capacitance of 100 pF. To enable the use of a low sampling rate (Fₛ = 1 MHz) for wireless signal acquisition, enhance the sensitivity of detecting changes in capacitance, and provide a direct readout of the change in capacitance of the sensor, the MAO was made part of another circuit termed the MAO+. In the MAO+, mixer and filter circuits were used to modulate f_AO from ∼2 MHz to ∼180 kHz and then to ∼25 kHz, allowing a sampling frequency as low as 50 kHz to be used to estimate the frequency. A phase-locked loop in the MAO+ enabled direct readout of the change in capacitance of the sensor through a 4½-digit digital display. The MAO+ was independently tested using PoC WINS devices and was able to detect a change in capacitance (at Q = 0.8 of the SE) of ∼10 pF, corresponding to ∼200 µm of crack advance.
This thesis presents the design, implementation, and operation of a wireless sensing circuit that allows signal acquisition at a low quality factor (Q ≤ 1) without compromising the SNR, demonstrating the first practical (wireless, made of structural materials) delamination sensor for advanced composites.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159372</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Efficiency Soft-Switched Pulsed Plasma Bias Supply System</title>
<link>https://hdl.handle.net/1721.1/159371</link>
<description>High-Efficiency Soft-Switched Pulsed Plasma Bias Supply System
Estrin, Julia
Radio Frequency (RF) generators play a crucial role as bias voltage sources in plasma-enhanced semiconductor manufacturing processes. Employing pulsed waveforms to generate plasma offers significant improvements in manufacturing precision. However, producing these waveforms is challenging due to the need for high voltages (kilovolt range), high frequencies (hundreds of kilohertz to low megahertz), precise timing, and broadband frequency content. Traditional methods to generate these waveforms are limited by semiconductor voltage ratings, leading to either low-voltage waveforms or complex circuits to achieve higher pulse voltages. This work presents a simple, compact, and efficient method for generating a pulsed bias voltage for plasma processing. The approach involves synthesizing the pulsed waveform at a low, convenient voltage and then using a transformer to step up the voltage to the desired level. A low-leakage inductance coaxial cable-based transformer is developed to provide scaling with sufficient fidelity across a wide frequency range. Zero voltage switching (ZVS) is achieved on all devices, ensuring highly efficient operation. The proposed system is validated through a lab bench prototype that generates pulses of 2.1 kV at a frequency of 400 kHz. Additionally, this system allows for adjustments in pulse duty ratio and slew rate, offering enhanced control and versatility for various applications.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159371</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and Evaluation of a Potentially Wearable Device for Circulating Cell Monitoring</title>
<link>https://hdl.handle.net/1721.1/159370</link>
<description>Development and Evaluation of a Potentially Wearable Device for Circulating Cell Monitoring
Jang, Kyuho
Monitoring circulating cells is crucial for assessing cancer metastasis and evaluating the efficacy of chimeric antigen receptor (CAR) T-cell therapies. Traditional blood-draw methods face challenges such as discontinuous monitoring and potential cell degradation, leading to inaccurate estimations. In vivo flow cytometry (IVFC), which measures real-time cellular response to laser illumination such as fluorescence, presents a viable alternative. However, its application in humans has been limited by the bulky design of existing devices and configurations unsuitable for larger organisms. This thesis introduces a novel, wearable fluorescence IVFC device tailored for human use, featuring a compact laser diode and silicon photomultiplier (SiPM) to enhance portability and functionality. The device includes a specialized optical system similar to a fluorescent microscope, which optimizes the signal-to-noise ratio by maximizing cellular fluorescence and minimizing background interference. Experimental determination of the limit of detection (LOD) for the SiPM and device establishes their detection capabilities and operational stability. Theoretical evaluations confirm that while the device can detect individual fluorescent cells in vitro, its current configuration does not support this sensitivity in vivo. The thesis also proposes strategies to improve the device’s sensitivity, aiming for reliable in vivo detection of single fluorescent cells.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159370</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Theory for Channel-less, Low-Pressure, Pressure-Compensating Drip Emitters</title>
<link>https://hdl.handle.net/1721.1/159369</link>
<description>Design Theory for Channel-less, Low-Pressure, Pressure-Compensating Drip Emitters
Howarth, Julia G.
Drip irrigation is a tool used to aid farmers in water-scarce locations by offering higher water efficiency than conventional methods. However, drip systems require a larger energy input, increasing capital and operating expenses and posing a barrier to adoption, specifically in lower-middle-income countries (LMICs). Low-pressure drip irrigation (LPDI) has been proposed as a way to achieve water-efficient drip systems with reduced operating expenses. Another barrier to adoption is clogging within emitters, which leads to increased maintenance costs and constrains the lifespan of the system. To address this challenge, this thesis proposes a design theory for emitters that could be clog-resistant in addition to being low-pressure by removing the smallest hydraulic feature of conventional emitters, the channel. The proposed designs are ‘channel-less’ and replace the pressure-varying hydraulic resistance of the channel with hydraulic resistance stemming from offsetting the outlet of the emitter away from the center axis of the emitter pocket, where it is conventionally located. The design theory hypothesizes that as the flexible diaphragm deflects and begins to cover the offset outlet, the gap through which flow is able to exit decreases, producing a hydraulic resistance that allows for a constant flow rate. To define this moment of activation, an analytical structural model is used to correlate diaphragm deflection with experimentally observed pressure-compensating (PC) regions for a series of emitters with varying lands depths and outlet positions. Experiments on nine channel-less emitters resulted in six emitters with PC capabilities, activation pressures as low as 0.4 bar, and flow rates ranging from 1.3 to 1.8 L/hr. 
The knowledge of when activation occurs in a channel-less emitter will allow designers the ability to vary geometric parameters and create a low-pressure, channel-less (LPCL) emitter that reduces operating expenses and clogging related barriers against drip adoption.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159369</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometrically Programmed Nano-Resistors for Ultra-Robust Artificial Neural Network Accelerator</title>
<link>https://hdl.handle.net/1721.1/159368</link>
<description>Geometrically Programmed Nano-Resistors for Ultra-Robust Artificial Neural Network Accelerator
Lee, Giho
Despite the transformative advances in artificial intelligence (AI), AI processing hardware has not matched the speed and power-efficiency requirements, restricting the realization of the full potential of AI and requiring innovation in AI hardware. The data-transmission bottleneck between memory and processor has been identified as the main source of poor computing speed and power efficiency. By embedding neural weights in hardware to minimize data transmission, non-volatile memory (NVM)-based in-memory computing has been expected to deliver several orders of magnitude of speed and power-efficiency gains. However, its practical implementation as next-generation AI hardware has not been successful due to non-idealities in NVMs, including instability, poor state resolution, challenges in programming, and system-on-a-chip (SoC) incompatibility. This thesis introduces an ultra-accurate and ultra-robust geometrically programmed nano-resistor (GPNR) that can overcome NVM non-idealities and enable a commercial AI accelerator based on analog in-memory computing. State-of-the-art 6-bit conductance-state resolution and 8-bit stability of the nano-resistor were realized by channel-geometry optimization and a thermodynamically stable material, while the SoC-incompatible programming of NVM devices is omitted. To evaluate the computing performance, experimental vector-matrix multiplication (VMM) operations were performed, showing 5-bit-accurate operation with a 28x28 GPNR array without selectors. Finally, an AI inference simulation was performed with a simplified 5x5-cropped MNIST digit-image classification task. The GPNR-based final classification layer demonstrates 91.0 % accuracy, comparable to the software limit of 93.2 %. The outcomes of this research not only bolster the feasibility of GPNR technology in practical applications but also highlight the potential for future advancements in AI accelerators that can fully harness the capabilities of analog in-memory computing.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159368</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polyanionizing rocksalt cathodes for lithium-ion batteries</title>
<link>https://hdl.handle.net/1721.1/159367</link>
<description>Polyanionizing rocksalt cathodes for lithium-ion batteries
Huang, Yimeng
Rocksalt-type and olivine polyanion-type cathodes are the two most dominant families of practical lithium-ion battery (LIB) cathodes. Rocksalt-type cathodes, including layered LiCoO₂, [chemical formula] (NCM), spinel LiMn₂O₄ and [chemical formula], and the later-developed disordered rocksalt cathodes (DRX), can have high energy densities up to 1000 Wh kg⁻¹ when utilizing hybrid anion-/cation-redox (HACR) under high upper cutoff voltages &gt; 4.5 V vs. Li/Li⁺. However, anion (oxygen) redox sacrifices cycling stability, as oxidized oxide ions [chemical formula] are more mobile than O²⁻, which can lead to percolating lattice oxygen diffusion to the reactive particle surface, oxygen loss, and extensive side reactions with the electrolyte. Meanwhile, polyanion-type cathodes, represented by LiFePO₄, have excellent thermal and structural stability, due to strong covalent bonding in the PO₄ polyanion structural unit that improves structural integrity, but their application is limited by low energy density. While each family has its own advantages, there has never been a marriage between the two materials families for performance-safety synergy.&#13;
&#13;
To achieve high energy density with good cycling stability, we hybridize rocksalt and polyanion-type cathodes by introducing a new cathode family of polyanionized disordered rocksalt cathode with spinel order (DRXPS), where an optimal amount of XO₄ (X = P, S, etc.) polyanions are incorporated in Li-M-O (M = Mn, Fe) rocksalt cathodes, free of Co and Ni. We propose design rules to estimate the optimal polyanion amount, x, such that [XO₄]x is sufficient to suppress long-range percolation of lattice oxygen diffusion/loss at high voltages, while not harming capacity (XO₄ content is much less than that in LiFePO₄). The estimated optimal x by design is verified with electrochemical cycling data. Rules for Li/M/O ratio selection are also proposed and verified experimentally.&#13;
&#13;
The DRXPS cathode family, represented by [chemical formula], can deliver high initial discharge capacities and energy densities up to 367 mAh g⁻¹ and 1122 Wh kg⁻¹, respectively (among the highest for LIB cathodes to date). But most importantly, they can have &gt; 70% capacity and energy density retention after 100 cycles, far exceeding the cycling performance of un-polyanionized DRX with high energy densities. This addresses the cycling stability issues that are the bottleneck for DRX development. The DRXPS cathodes also demonstrate good rate performance and large compositional tunability (similar performance for compositions with varying M, X elements and u, v, x selection). In addition, we clarify their crystal structure, morphology, and redox mechanism via systematic characterizations.&#13;
&#13;
We believe that polyanionization should be a general strategy to address the poor high-voltage cyclability issue, which is a key challenge for earth-abundant cathodes. The superior performance demonstrated and the design rules proposed for DRXPS shed light on the future development of advanced sustainable cathodes.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159367</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proton Exchange Membrane Electrolysis Applied to the Dehydration of Cow Milk</title>
<link>https://hdl.handle.net/1721.1/159366</link>
<description>Proton Exchange Membrane Electrolysis Applied to the Dehydration of Cow Milk
Morice, Peter G.
The dehydration of cow milk to powder form extends product shelf life and reduces product shipping costs and emissions. However, the thermal methods commonly employed by the dairy industry produce harmful emissions through the combustion of fossil fuels. This work explores the potential role of proton exchange membrane (PEM) electrolysis as an electrochemical alternative in the process of concentrating milk solids. Although the thermodynamic specific energy of electrolysis at [mathematical notation] is high compared to existing thermal methods around [mathematical notation], experimental results for PEM electrolysis assisted by mechanical centrifugation suggest a specific energy closer to [mathematical notation] is possible. The energy-competitive PEM electrolysis method has the additional benefit of zero emissions when supplied by renewable energy sources. Analysis of milk solids processed by the electrolysis-assisted method shows promising levels of fat, mineral, and total protein content, with liquid chromatography quantifying both casein and whey protein types retained in the solid product.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159366</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A class of high-efficiency air-core power transformers with flux-guiding resonators</title>
<link>https://hdl.handle.net/1721.1/159365</link>
<description>A class of high-efficiency air-core power transformers with flux-guiding resonators
Salk, Noah J.
Developments in high-frequency power semiconductors have enabled the miniaturization of power system components, leading to the reduction of heavy, lossy magnetic steel cores as a medium for electromagnetic energy transfer. A final push towards fully “air-core” power devices is underway, and a new class of coreless transformers is under development at MIT targeting the cost-sensitive application of grid-tied renewable energy farms. The topology is composed of a primary coil, a secondary coil, and one or more nested resonant tanks that facilitate efficient multi-path energy transfer. This class of transformers presents opportunities for upfront cost savings via material reduction, and long-term cost savings via efficiency gains and the resulting reduction of lost profit. This work examines the theory, modeling efforts, system-level considerations, and rigorous experimental validation necessary to compare the performance of these transformers with other topologies and establish industrial viability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159365</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design/System Technology Co-optimization of Gallium Nitride High Electron Mobility Transistors for Next-G 3DIC Heterogeneous Integration of Gallium Nitride and Si CMOS</title>
<link>https://hdl.handle.net/1721.1/159364</link>
<description>Design/System Technology Co-optimization of Gallium Nitride High Electron Mobility Transistors for Next-G 3DIC Heterogeneous Integration of Gallium Nitride and Si CMOS
Yadav, Pradyot Singh
With data rates pushing into the Tbps range, there is an urgent need for mmWave and sub-terahertz RF front ends and transistors. Gallium Nitride (GaN) transistors have continued to push the limits of high-power-density, high-frequency semiconductor devices. The future of GaN radio frequency (RF) circuit technology lies at the intersection of device engineering, advanced packaging, and circuit design. Currently, these are three separate fields with little-to-no communication between them, resulting in critical limitations to today’s technology. These fields need to collaborate, cross-pollinate, and intersect in order to modernize and advance innovation for the next generation of RF front ends. To design the most efficient W-G-band devices and systems, we must embrace a design/system-technology co-optimization (DTCO/STCO) approach that combines innovative GaN transistors with engineered linearity, novel heterogeneous integration with state-of-the-art Silicon (Si) bias and control circuitry, and advanced physics-based modeling. This thesis presents the development of a 3DIC consisting of GaN HEMTs and Si CMOS BEOL, in particular W-band GaN HEMTs, Si CMOS BEOL circuits in Intel16, and advanced packaging of dielets. The full chip continuum is investigated and innovated upon.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159364</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Al-Ni Nanofilm Powered Miniature Linear Actuator for Medical Devices</title>
<link>https://hdl.handle.net/1721.1/159363</link>
<description>Al-Ni Nanofilm Powered Miniature Linear Actuator for Medical Devices
Cotey, Samuel A.
A medical device is sought to improve drug delivery options available to healthcare providers and patients; our initial focus is to develop a piston that can provide the power necessary to perform an injection from an ingestible device. While many methods to administer drugs currently exist, the administration method in many cases is largely driven by factors that supersede ease, convenience, or comfort for the patient [1]. Many patients are saddled with cumbersome drug regimens that expose them to the risk of complex and painful drug administration paths and dependence on medical sharps [2, 3]. For these patients, being able to take injectable drugs orally allows them to use what appear to be simple, traditional drug delivery methods in lieu of injections that are painful and inconvenient. To perform an injection with a device that fits within an ingestible form factor, a novel piston is required. A concept design for an Al-Ni nanofilm-powered miniature linear actuator has been developed to perform jet injections from within the gastrointestinal anatomy of a patient. This actuator consists of a small pressure vessel filled with liquid alcohol that undergoes a phase change to gas and generates pressure that can be used to cycle a piston in a drug-loaded cylinder. Via an exothermic reaction, the nanofilm deposits thermal energy into the alcohol-filled pressure vessel to generate the pressure needed to perform a jet injection. Cylindrical pressure vessel chambers with a diameter of 7 mm and heights ranging from 3 mm to 7.5 mm were 3D printed and used to measure the peak internal pressure of the vessel as well as its work output. The piston was used to push incompressible fluid through a nozzle in order to characterize the actuator’s work output. Using Bernoulli’s equation, the pressure on the piston head as a function of piston location along the stroke length was determined to characterize actuator performance as a function of pressure vessel size. 
The pressure vessel and the piston were modeled theoretically and empirically in order to identify the relevant design parameters so the piston can be effectively incorporated into the overall injection device.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159363</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Acoustic Expander: A New Expansion Machine for Use in Cryogenic Refrigeration</title>
<link>https://hdl.handle.net/1721.1/159362</link>
<description>The Acoustic Expander: A New Expansion Machine for Use in Cryogenic Refrigeration
Adams, Jacob
The acoustic expander is a new expansion machine with potential applications to cryogenic refrigeration and liquefaction. Cryogenic expansion machines produce mechanical energy from the expansion of a working fluid from high pressure to low pressure, thereby cooling the low-pressure fluid for use in refrigeration. The novelty of the acoustic expander is that the mechanical energy is transferred through a gaseous acoustic wave, as opposed to a traditional piston- or turbo-expander, where the mechanical energy is transferred through a solid piston or a spinning shaft. The acoustic expander consists of passive reed-valves coupled to a resonant cavity, much like a wind instrument. The working fluid enters and exits the resonant cavity through the oscillating reed-valves, which drives a standing acoustic wave in the resonant cavity. The acoustic wave carries mechanical energy from the low-temperature region to the high-temperature region, where the energy is then dissipated as heat to the ambient environment; this transfer of energy cools the low-pressure fluid as it exits the acoustic expander. The practical advantage of the acoustic expander over piston- or turbo-expanders is the absence of dynamic sliding seals or complex moving parts at cryogenic temperatures. &#13;
&#13;
This dissertation presents a first-principles thermodynamic model of the acoustic expander and describes the behavior of the coupled reed-resonator system. The model reveals that flow blow-by through the reed-valves at large pressure differentials is a primary loss mechanism. Several proof-of-concept prototypes were constructed, with both a single reed-valve and a double reed-valve, demonstrating isentropic expansion efficiencies between 40% and 50% at pressure ratios up to 2.5. These experiments incorporated the acoustic expander with a recuperative heat exchanger, reaching temperatures of −62 °C (211 K) with vacuum insulation and air as the working fluid. The cooling power of these prototypes is between 50 and 150 W at room temperature with a mass flow rate of 1 to 3 g/s. Instabilities that cause the acoustic expander to shut off and reed-valve fatigue are identified as key challenges. Future work may address these challenges and integrate the acoustic expander into cryogenic cooling systems for the refrigeration of superconducting magnets or quantum computers.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159362</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Droplet Based Microalgae Photobioreactor for Biofouling Prevention</title>
<link>https://hdl.handle.net/1721.1/159361</link>
<description>Droplet Based Microalgae Photobioreactor for Biofouling Prevention
Callan, Tess A.
Microalgae have a wide variety of applications aiding in sustainability, yet during the cultivation process, photobioreactor biofouling remains an issue. It blocks light from entering the reactor and necessitates reactor cleaning, ultimately reducing overall reactor productivity and increasing cultivation costs. Here we investigate a new type of reactor that removes the possibility of biofouling by growing the algae in aqueous droplets surrounded by oil that preferentially wets the reactor surface. We first look into growing the algae in droplets and discuss major parameters that will be impacted. Then, we show a droplet-based reactor that demonstrates the potential to scale the system with similar growth rates to industry. Finally, we investigate the impact on major costs to confirm the economic viability of transitioning to this reactor. Overall savings in the cultivation process, mainly from power reduction and biofouling prevention, are shown.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159361</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning through the Lens of Data</title>
<link>https://hdl.handle.net/1721.1/159360</link>
<description>Machine Learning through the Lens of Data
Park, Sung Min
Many critical challenges in machine learning—e.g., debugging model behavior or selecting good training data—require us to relate outputs of models back to the training data. The goal of predictive data attribution, the focus of this thesis, is to precisely characterize the resulting model behavior as a function of the training data in order to tackle these challenges. In the first part of this thesis, we introduce a framework, datamodeling, for formalizing and constructing effective methods for predictive data attribution. Despite the complexity of modern machine learning systems (e.g., end-to-end training of deep neural networks using stochastic gradient algorithms), we show that we can accurately predict model outputs from simple linear functions of the training data. We then demonstrate that these predictors—which we call datamodels—provide a versatile primitive for various tasks, ranging from predicting the effect of dataset counterfactuals to identifying brittle predictions. Next, to further improve the scalability of data attribution in this framework, we design a new method trak (Tracing with the Randomly-projected After Kernel) that is both effective and computationally tractable for large-scale, differentiable models. By leveraging a kernel approximation and other classic ideas from statistics and algorithm design, we are able to reduce the challenging problem of attributing the original DNN to that of attributing a simpler surrogate. We demonstrate the effectiveness of trak across various modalities and scales: image classifiers trained on ImageNet, vision-language models (CLIP), language models (BERT and mT5), and diffusion models. In the second part of this thesis, we explore applications of this framework developed in the first part: First, we leverage datamodels for the problem of learning algorithm comparison, where the goal is to detect differences between models trained with two different learning algorithms. 
Our algorithm, ModelDiff, enables us to automatically surface biases that distinguish different learning algorithms by differentiating how they use the same training data. Lastly, we tackle the challenging problem of machine unlearning, wherein the goal is to “unlearn” a small fraction of training data from a trained model. By leveraging the fact that datamodels can accurately approximate the “oracle” predictions, we design a simple finetuning algorithm that allows us to unlearn at a significantly smaller cost than prior methods.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159360</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston</title>
<link>https://hdl.handle.net/1721.1/158849.2</link>
<description>A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston
Proman, Zachary D.
This development and business plan considers the neighborhood context and current market conditions characterizing the subject site’s redevelopment potential. The subject site, further defined in this thesis, is a prime parcel of land in the South Boston neighborhood of Boston, MA currently improved and used for quick-serve restaurant operations. Proximate to the Seaport, Fort Point, and Dorchester, South Boston is surrounded by demand drivers resulting in explosive growth that make it one of the most desirable and expensive housing submarkets in the entire City of Boston. Development considerations are fully defined in the report including zoning, equity, financial projections, ground lease, and market-level factors. A conclusion is made on the feasibility of the proposed project with recommendations for next steps resulting from the modeled base-case scenario. Market assumptions and any unresolved development issues are clearly identified and discussed.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158849.2</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An analysis of requirements for a road-rail commuting system</title>
<link>https://hdl.handle.net/1721.1/159329</link>
<description>An analysis of requirements for a road-rail commuting system
Anderson, Ray, 1918-
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1963; Includes bibliographical references (leaf 60).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159329</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization of four independently-actuated, complementary-sized control valves and nozzle groups to improve steam turbine efficiency</title>
<link>https://hdl.handle.net/1721.1/159328</link>
<description>Optimization of four independently-actuated, complementary-sized control valves and nozzle groups to improve steam turbine efficiency
Yeaple, Thomas L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Bibliography: leaf 79.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159328</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An office building for Charleston, South Carolina</title>
<link>https://hdl.handle.net/1721.1/159327</link>
<description>An office building for Charleston, South Carolina
Maybank, Joseph.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1958; Includes bibliographical references (leaf 21).
</description>
<pubDate>Wed, 01 Jan 1958 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159327</guid>
<dc:date>1958-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Camus and the absurd</title>
<link>https://hdl.handle.net/1721.1/159326</link>
<description>Camus and the absurd
Dorn, Christopher Keith.
Thesis: B.S., Massachusetts Institute of Technology, Department of Humanities, 1981; Includes bibliographical references (leaf 90).
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159326</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An experimental investigation of magnet geometry and hysteresis on simultaneous lift and guidance ferromagnetic suspensions</title>
<link>https://hdl.handle.net/1721.1/159325</link>
<description>An experimental investigation of magnet geometry and hysteresis on simultaneous lift and guidance ferromagnetic suspensions
Farley, Holt Leonard.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159325</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of technological diffusion : the replacement of steam by diesel locomotives in the United States.</title>
<link>https://hdl.handle.net/1721.1/159324</link>
<description>A study of technological diffusion : the replacement of steam by diesel locomotives in the United States.
Hydell, Richard Paul.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1977; Vita.; Bibliography: leaves 306-308.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159324</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The impacts of labor-intensive roads in Colombia : a framework for analysis, research design and preliminary test.</title>
<link>https://hdl.handle.net/1721.1/159323</link>
<description>The impacts of labor-intensive roads in Colombia : a framework for analysis, research design and preliminary test.
Borrero Mutis, Santiago.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1978; Bibliography : leaves 150-153.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159323</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Detection of products of molecular beam reactions by laser-induced fluorescence.</title>
<link>https://hdl.handle.net/1721.1/159322</link>
<description>Detection of products of molecular beam reactions by laser-induced fluorescence.
Silver, Joel Art.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1976; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159322</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A petrologic study of the Peabody granite stock</title>
<link>https://hdl.handle.net/1721.1/159321</link>
<description>A petrologic study of the Peabody granite stock
Pearce, J. Stewart.; Robinson, Burr A.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1909
</description>
<pubDate>Fri, 01 Jan 1909 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159321</guid>
<dc:date>1909-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The design of a cane sugar plant</title>
<link>https://hdl.handle.net/1721.1/159320</link>
<description>The design of a cane sugar plant
Pozas, Emilio.; Stefani, Luis.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1924; Includes bibliographical references (leaf 53).
</description>
<pubDate>Tue, 01 Jan 1924 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159320</guid>
<dc:date>1924-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a reinforced concrete theatre balcony cantilever originally of steel</title>
<link>https://hdl.handle.net/1721.1/159319</link>
<description>Design of a reinforced concrete theatre balcony cantilever originally of steel
Huang, Chia Jua.
Thesis: B.S., Massachusetts Institute of Technology, Department of Architectural Engineering, 1927; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1927 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159319</guid>
<dc:date>1927-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>-A summer resort hotel- in Siam</title>
<link>https://hdl.handle.net/1721.1/159318</link>
<description>-A summer resort hotel- in Siam
Sobhit, Momluang.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1938; Includes bibliographical references (leaf 48).
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159318</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A seacoast fort</title>
<link>https://hdl.handle.net/1721.1/159317</link>
<description>A seacoast fort
Rockwell, Matthew L.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1938; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159317</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A ski centre at Cardigan, New Hampshire</title>
<link>https://hdl.handle.net/1721.1/159316</link>
<description>A ski centre at Cardigan, New Hampshire
Purcell, William F. H.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1938; Includes bibliographical references (leaf 27).
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159316</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new city hall for Boston</title>
<link>https://hdl.handle.net/1721.1/159315</link>
<description>A new city hall for Boston
Noonan, John J.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1938
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159315</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the existence of certain entire functions of zero type</title>
<link>https://hdl.handle.net/1721.1/159314</link>
<description>On the existence of certain entire functions of zero type
Scanlan, Robert H.
            (Robert Harris),
            1914-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1943; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1943 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159314</guid>
<dc:date>1943-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Winter weather types of the eastern North Pacific and adjacent coastal and island areas</title>
<link>https://hdl.handle.net/1721.1/159313</link>
<description>Winter weather types of the eastern North Pacific and adjacent coastal and island areas
Kosco, George Francis.; Dorsett, John O. F.
Thesis: M.S., Massachusetts Institute of Technology, Department of Meteorology, 1940; Includes bibliographical references (leaves [44]-[45]).
</description>
<pubDate>Mon, 01 Jan 1940 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159313</guid>
<dc:date>1940-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for a gothic church of five hundred sittings</title>
<link>https://hdl.handle.net/1721.1/159307</link>
<description>Design for a gothic church of five hundred sittings
LeBaron, F. N.
Thesis: B.S., Massachusetts Institute of Technology, Department of Architecture, 1897
</description>
<pubDate>Fri, 01 Jan 1897 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159307</guid>
<dc:date>1897-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Duty test on the centrifugal pumping unit, Springvale Pumping Station, Natick, Massachusetts</title>
<link>https://hdl.handle.net/1721.1/159306</link>
<description>Duty test on the centrifugal pumping unit, Springvale Pumping Station, Natick, Massachusetts
Yin, Cho-Lan.; Sharabata, Ahmed Osman.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1928
</description>
<pubDate>Sun, 01 Jan 1928 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159306</guid>
<dc:date>1928-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The semi-simplicial free lie ring</title>
<link>https://hdl.handle.net/1721.1/159305</link>
<description>The semi-simplicial free lie ring
Schlesinger, James W.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1964; Vita.; Includes bibliographical references (leaf 29).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159305</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Olefins from amine oxides</title>
<link>https://hdl.handle.net/1721.1/159304</link>
<description>Olefins from amine oxides
Lebel, Norman A.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1957; Vita.
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159304</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Communication in the presence of noise</title>
<link>https://hdl.handle.net/1721.1/159303</link>
<description>Communication in the presence of noise
Schulman, Leonard J. Y.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1992; Includes bibliographical references (leaves 58-61).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159303</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aircraft leasing and airline corporate strategy</title>
<link>https://hdl.handle.net/1721.1/159302</link>
<description>Aircraft leasing and airline corporate strategy
Setyopurnomo, Rudy.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaves 130-131).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159302</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High energy scattering of helium by the molecular hydrogen isotopes</title>
<link>https://hdl.handle.net/1721.1/159301</link>
<description>High energy scattering of helium by the molecular hydrogen isotopes
Fowler, Michael Coolidge,
            1941-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1967; Vita.; Bibliography: leaves [168]-[171].
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159301</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Housing for the elderly in Tewksbury, Massachusetts.</title>
<link>https://hdl.handle.net/1721.1/159300</link>
<description>Housing for the elderly in Tewksbury, Massachusetts.
Roman, George Anthony.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1965; Bibliography: leaf 26.
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159300</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An interactive statistics package for the social sciences.</title>
<link>https://hdl.handle.net/1721.1/159299</link>
<description>An interactive statistics package for the social sciences.
Lebling, Peter David.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1973; Bibliography: leaf 92.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159299</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>International political behavior: historical analysis of Scandinavia and the Netherlands.</title>
<link>https://hdl.handle.net/1721.1/159298</link>
<description>International political behavior: historical analysis of Scandinavia and the Netherlands.
Deber, Raisa Rebecca Sarah Berlin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1971; Bibliography: leaves 176-185.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159298</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The political economy of trade : a computer simulation of contending regimes and the international division of labor</title>
<link>https://hdl.handle.net/1721.1/159297</link>
<description>The political economy of trade : a computer simulation of contending regimes and the international division of labor
Pollins, Brian.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1981; Bibliography: leaves 319-330.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159297</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of dietary protein deficiency during different stages of pregnancy on fetal development and maternal body composition and behavior</title>
<link>https://hdl.handle.net/1721.1/159296</link>
<description>Effects of dietary protein deficiency during different stages of pregnancy on fetal development and maternal body composition and behavior
Zartarian, Gary Michael.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1979; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159296</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An examination of private capital available to the railroad industry.</title>
<link>https://hdl.handle.net/1721.1/159295</link>
<description>An examination of private capital available to the railroad industry.
Wait, Barbara Rust.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1979; Bibliography: leaves 103-106.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159295</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representation of the spatial organization of three-dimensional shapes for visual recognition.</title>
<link>https://hdl.handle.net/1721.1/159294</link>
<description>Representation of the spatial organization of three-dimensional shapes for visual recognition.
Nishihara, H. K.
            (Herbert Keith)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1978; Bibliography: p. 179-181.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159294</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A normative model for the railroad freight car acquisition planning process.</title>
<link>https://hdl.handle.net/1721.1/159293</link>
<description>A normative model for the railroad freight car acquisition planning process.
Burton, Philip Marc.
Thesis: Civ. E., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaves 240-249.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159293</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of pressure broadening theory to atmospheric microwave absorption.</title>
<link>https://hdl.handle.net/1721.1/159292</link>
<description>Application of pressure broadening theory to atmospheric microwave absorption.
Lam, Kai S.
            (Kai Shue),
            1949-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1976; Vita.; Bibliography: leaves 415-419.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159292</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A design of a club house for a country club</title>
<link>https://hdl.handle.net/1721.1/159291</link>
<description>A design of a club house for a country club
DeGolyer, Robert S.
Thesis: B.S., Massachusetts Institute of Technology, Department of Architecture, 1898
</description>
<pubDate>Sat, 01 Jan 1898 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159291</guid>
<dc:date>1898-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effect of eddies on fCO₂ in the North Pacific surface ocean</title>
<link>https://hdl.handle.net/1721.1/159266</link>
<description>The effect of eddies on fCO₂ in the North Pacific surface ocean
Padalino, Christine
We investigate the impact of mesoscale eddies in the North Pacific on surface ocean fCO₂ using the in-situ measurements from the Surface Ocean CO₂ Atlas (SOCAT) to inform the importance of the mesoscale dynamics on global CO₂ fluxes. We sort SOCAT measurements from 2000-2019 by whether or not they are in an eddy, perform basin scale analysis, and present case studies. The results show lower fCO₂ in both anticyclones and cyclones compared to the background ocean, with the magnitude of the anomaly varying seasonally and spatially. Due to the many potential mechanisms of the eddy impacts, we analyze a temperature normalized fCO₂ to tease apart the impact of altered temperature from a biological response or mixing. With this method, we find evidence that eddies are increasing the background biological activity. To further attempt to separate the different effects eddies could have on surface fCO₂ and CO₂ fluxes, we identify two long-lived eddies with many measurements over their lifetimes to use as case studies. We find that both the anticyclonic and cyclonic eddy initially increase fCO₂, but at the end of the lifetime mixing likely plays a role in counteracting temperature effects. The investigation of the varying effects the mesoscale can have on CO₂ fluxes not only allows for a better understanding of how eddies will affect surface fCO₂ but also provides insight into the potential impact on global scale estimates. Our analysis shows that on average, while mesoscale eddies modulate surface ocean fCO₂, they do not have a detectable enhancement of the CO₂ flux in the North Pacific.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159266</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nothing Unwanted: Prototyping Matter out of Place</title>
<link>https://hdl.handle.net/1721.1/159265</link>
<description>Nothing Unwanted: Prototyping Matter out of Place
Wang, Yiqing
What we discard never truly disappears.  &#13;
&#13;
Accompanying the societal shift from post-war scarcity to a consumerist culture, the contemporary building industry relies on abundant virgin materials, machinery, and a global transportation network. Immersed in this culture of convenience, architecture has limited agency to engage responsibly and intimately with reclaimed materials. The design of waste, inevitably, often symbolizes the separation between society and its waste, marked by an intention to remove, re-form, and re-standardize. Zero-waste systems and circular economy often inadvertently create hidden wastes, labor, and carbon footprints, leading to an uneven distribution of environmental harms.  &#13;
&#13;
The thesis explores the unique materiality of municipal waste, linking human living with their unwanted with an architectural prototype. The new "unwanted" architecture integrates local waste into an adaptive inventory, avoiding over-precision, over-purification, and over-modularization. Based on the characteristics of US municipal waste, local-sourced garbage, including e-waste, plastics, wood, paper, metal, dust, and food waste, is studied, calibrated, and assembled to create building components and rooms. The bottom-up approach offers a way to compute heterogeneous materials with digital methods and low-tech on-site operations to minimize environmental impact. The richness of space blurs the boundaries between domesticity and abjection and between the sublime and the disgusting. &#13;
&#13;
The prototypes aim to rebuild both the Functional and Emotional Unwanted and re-imagine a scalable and operable building system. The design contrasts the previously visible waste in architectural design with today's invisible waste stream due to sophisticated waste management. It demonstrates an intimate approach to the gigantic amount of urban waste, emphasizing its cultural, personal, and collective dimensions.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159265</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Mechanocaloric Effects and Tunable Thermal Conductivity in Amorphous Elastic Polymer Fibers</title>
<link>https://hdl.handle.net/1721.1/159264</link>
<description>Engineering Mechanocaloric Effects and Tunable Thermal Conductivity in Amorphous Elastic Polymer Fibers
Li, Buxuan
Energy-efficient clean technologies for active heating/cooling and passive thermal regulation are in high demand for applications spanning different scales, from city planning and building heating/cooling to wearable and portable devices to miniature electronics. To advance relevant technologies and simultaneously lower their environmental footprint, two key research and engineering questions remain to be answered: (1) how to pump/convert energy between thermal and other forms more efficiently, and (2) how to transport thermal energy in a tunable and scalable way for dissipation/insulation applications.&#13;
&#13;
Properly-engineered polymer materials may provide solutions to both challenges in a synergistic way. Among all materials, polymers stand out by several figures of merit, including low cost, chemical inertness, ease of manufacture and scalability, and light weight. They can be engineered by the application of temperature and/or strain, which impose different molecular arrangements within the material, enabling control over the degree of crystallinity, chain entanglement, and the dominant chain orientation. When polymers undergo microscopic structural changes, they may exhibit temperature responses driven by their internal entropy changes, known as mechanocaloric (mC) effects. mC effects offer an avenue of conversion between mechanical and thermal forms of energy. Polymer chain alignment, on the other hand, also has a strong effect on the vibration characteristics of polymers, and thus on their thermal conductivity (TC) values. Through a continuous strain-temperature engineering of elastic amorphous polymer fibers, we demonstrate unique opportunities to address both challenges in energy conversion and transfer.&#13;
&#13;
We developed elastic fibers, which are melt spun from an olefin block copolymer (OBC), and exhibit (1) competitive mC performance with the temperature change exceeding 5K and the material coefficient of performance (COP) larger than 10, and (2) reversible thermal conductivity, which is continuously tunable in the range from 1.2 to 2.5 W/mK via uniaxial strain deformations. The entanglement-enabled elasticity of the cross-linker-free block co-polymer chosen for this research allows the fibers to survive thousands of loading-release stretching cycles. In striking contrast with the vulcanized rubber commonly used as an efficient mC material, the OBC is a thermoplastic with a relatively low melting temperature (&lt;120C), which can be easily recycled and molded into different geometries. By optimizing both the fabrication parameters and the operational scenarios, we demonstrated high potential of elastic OBC fibers in advanced thermal applications within a wide temperature window from -20C to 70C. We further analyzed structural changes, thermodynamics, and vibration spectra of OBC fibers under different strains and temperatures, elucidating the mechanisms underlying the observed phenomena. This study provides insights into sustainable engineering and optimization of polymer-based solid-state refrigerators, heat pumps and tunable materials for efficient energy dissipation and passive thermoregulation.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159264</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mobile Multi-Bounce LiDAR</title>
<link>https://hdl.handle.net/1721.1/159263</link>
<description>Mobile Multi-Bounce LiDAR
Somasundaram, Siddharth
Single-photon avalanche diodes (SPADs) are emerging sensors that can measure the propagation of light in a scene, capturing higher-order reflections, shadows, and light transport that ordinary cameras are unable to. Measurement of these multi-bounce light paths is especially useful for non-line-of-sight (NLOS) imaging. The increasing availability of SPAD sensors on mobile devices (e.g. iPhone Pro LiDAR) raises the potential to enable NLOS capabilities on consumer devices in the future. Currently, these sensors are primarily employed for LiDAR-based depth estimation, with untapped potential in other applications. In light of recent advances in SPAD device development, the timing is opportune to revisit the applicability of multi-bounce LiDAR techniques on consumer-grade mobile devices.&#13;
&#13;
This thesis extends the applicability of multi-bounce LiDAR techniques from research-grade SPAD hardware to consumer-grade mobile LiDARs. First, we enable single-shot capture of two-bounce signals and remove the need for laser scanning by developing a tomographic formulation for two-bounce non-line-of-sight imaging. Second, we enable real-time non-line-of-sight capture at eye-safe laser power under object and camera motion. Our approach is inspired by principles from burst photography. &#13;
&#13;
We implement and evaluate the proposed algorithms in simulations and on experimental SPAD hardware. We also demonstrate real-time non-line-of-sight tracking on a consumer-grade smartphone LiDAR. Potential future applications of our results include "X-ray vision" in AR/VR, full-body tracking for AR headsets, room scanning for hard-to-reach areas, collision avoidance for autonomous vehicles, and robotic navigation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159263</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Language Representations in the Human Mind and Brain</title>
<link>https://hdl.handle.net/1721.1/159208</link>
<description>Characterizing Language Representations in the Human Mind and Brain
Tuckute, Greta
Language allows for the mapping of speech signals or written characters to meaning every time we engage in conversation or read. How can biological tissue, our brains, support this mapping process? This thesis characterizes the neural representations that enable humans to infer the meaning of a sentence. &#13;
My work builds on the foundation that regions in the left frontal and temporal parts of the brain causally and selectively support language processing (the ‘language network’). Chapter 2 asks how the language network develops. Through a case study of an individual born without their left temporal lobe (but with neurotypical language abilities), I demonstrate that the presence of temporal language regions appears to be necessary for the development of ipsilateral frontal regions, which echoes evidence from aphasia that the temporal areas are more important for language function. Chapters 3-5 aim to understand the representations and computations that mediate language comprehension. Traditionally, this line of inquiry has been challenging given the limited utility of probing animal models whose communication systems differ substantially from human language. However, the recent advent of artificial language models (LMs) has demonstrated that a system other than the human brain is capable of generating fluent and coherent text. Chapter 3 introduces the use of LMs as model systems for studying neural representations of language. I ask what aspects of an LM’s representation of the linguistic input matter the most for model-to-brain similarity. Across a series of systematic comparisons, I show that meanings of content words, such as nouns and verbs, matter more than syntactic structure (e.g., word order and function words). In Chapter 4, I leverage this model-to-brain similarity to ask what kinds of linguistic input the human language regions are most responsive to. I use an LM to identify sentences that maximally drive or suppress activity in language regions, and I demonstrate that these regions respond most strongly to sentences that are sufficiently linguistically well-formed but unpredictable in their structure or meaning, suggesting that this network is tuned to input predictability in the service of efficient meaning extraction. 
Finally, in Chapter 5, I use high-field (7T) fMRI to search for the organizing dimensions of the language network. By performing a data-driven decomposition of neural responses to linguistically diverse sentences, I show that only two components—shared across individuals—emerged robustly, accounting for about 34% of the explainable variance. In line with work in Chapter 4, the first component appears to correspond to processing difficulty. The second component appears to correspond to meaning abstractness. Both components are distributed across frontal and temporal brain areas but show systematic topographies across participants. &#13;
Altogether, this thesis provides a detailed characterization—across thousands of sentences and through spatially-precise neural measurements—of how the fronto-temporal language network supports language comprehension. This work brings us closer to deciphering the circuits and mechanisms that underlie the astonishing human capacity to infer complex meanings through language.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159208</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spontaneous activity in the mouse visual cortical slice: biophysical characterization and pathophysiology</title>
<link>https://hdl.handle.net/1721.1/159207</link>
<description>Spontaneous activity in the mouse visual cortical slice: biophysical characterization and pathophysiology
Heinrich, Maxwell John
As we continue to await the first disease-modifying treatment for Fragile X Syndrome (FXS), the leading inherited cause of intellectual disability, the search continues for novel ways to address the core pathophysiology of this neurodevelopmental disorder. FXS is caused by silencing of the FMR1 gene, which results in the loss of fragile X messenger ribonucleoprotein (FMRP), a critical protein in regulating nervous system development and neural circuit function. Due to the loss of FMRP’s canonical role in inhibiting mRNA translation, an elevated rate of protein synthesis is widely recognized as a core feature of FXS pathophysiology. In this thesis, I present my investigation of a relatively understudied form of pathophysiology that arises in brain slices prepared from a mouse model of FXS, the Fmr1-knockout (KO) mouse. Relative to wildtype (WT) slices, Fmr1-KO visual cortical slices exhibit increased spiking activity in layer 5. Critically, this hyperactivity phenotype is rapidly reversed not only by treatments known to restore elevated rates of protein synthesis to WT levels, but also by the protein synthesis inhibitor cycloheximide. Therefore, rapidly turned over pathogenic proteins are suspected to actively maintain this form of pathophysiology. Identifying these pathogenic proteins could reveal novel therapeutic targets for the treatment of FXS. Progress requires a deeper understanding not only of the cellular pathophysiology supporting this hyperactivity, but also of the biophysical mechanisms driving the activity itself, as each remains relatively unexplored. In Chapter 1, I review relevant FXS pathophysiology and our understanding of the various forms of spontaneous activity generation in neocortical brain slices. In Chapter 2, I dive into the biophysical mechanisms underlying the sparse, spontaneous spiking activity generated in WT visual cortical slices. 
Here, I find extreme sensitivity to the ionic composition of the artificial cerebral spinal fluid (aCSF) bathing the slices. Lower, more physiologic concentrations of extracellular divalent cations render extratelencephalic layer 5 pyramidal neurons intrinsically active by altering the activity of the persistent sodium current. In Chapter 3, I detail my journey investigating the pathophysiology underlying the hyperactivity phenotype in Fmr1-KO mice. While my early investigations indicated that depolarized intratelencephalic layer 5 pyramidal neurons drive hyperactivity of the layer 5 circuit, this intracellular phenotype proved to be ephemeral and likely due to suboptimal slice conditions. Informed by the investigations of Chapter 2, I conclude that the hyperactivity phenotype is not driven by cell-intrinsic hyperexcitability of layer 5 pyramidal neurons. My optimization of slice conditions preserves the hyperactivity phenotype, setting the stage for future intracellular investigation of the cause of this pathophysiology. In Chapter 4, I describe the implications of my work for understanding activity generation in neocortex and provide direction for future studies of spontaneous activity in WT and Fmr1-KO visual cortical slices.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159207</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward the Understanding of Brain’s Molecular Language</title>
<link>https://hdl.handle.net/1721.1/159206</link>
<description>Toward the Understanding of Brain’s Molecular Language
Zoghi Tavana, Sara
What underlies the extraordinary capacity of neurons to process information, form memories, and orchestrate complex behaviors? Over a century of research has established that proteins are the central functional molecules of the cell, yet translating this knowledge into an understanding of emergent neural phenomena and effective treatments for neurological disorders remains elusive. We argue that this paradox stems from studying proteins in isolation, overlooking how their function is fundamentally shaped by spatial context and interactions with DNA, RNA, other proteins, lipids, carbohydrates, and metabolites. This coordinated molecular interplay, we posit, ultimately gives rise to the complex neural circuits and behaviors observed in higher organisms. Intriguingly, Alfred Binet foreshadowed this perspective as early as 1889 when he suggested that even simple, single-celled organisms—lacking anatomically defined nervous systems—might harbor a "diffuse nervous system" of molecular interactions within their cytoplasm enabling complex behaviors. However, the historical progression of neuroscience, largely dictated by available methodologies and oscillating between siloed reductionist molecular approaches and systems-level analyses, has not yet been able to fully capture this intricate molecular choreography underlying neural function. In this review, we examine how studying molecular species in isolation, while yielding important insights, has ultimately proven insufficient for understanding emergent neural functions. We propose that recent technological advances in expansion microscopy, molecular anchoring, machine learning-enabled protein detection, and cryo-fixation now make it possible to map molecular networks in their native context. This integrative approach promises to illuminate the molecular "language" of the brain, shedding light on how collective interactions among biomolecules give rise to emergent neuronal abilities—and guide future therapeutic innovations.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159206</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Models and Tools for Studying Infants’ Attention</title>
<link>https://hdl.handle.net/1721.1/159205</link>
<description>Models and Tools for Studying Infants’ Attention
Raz, Gal
From birth, infants actively control where they look, long before they gain any significant motor control over other body parts. This early emergence of attentional preferences has allowed psychologists to use infants' gaze to gain insight into the developmental origins of perception and cognition. Understanding infant gaze is therefore critical both for understanding early development and for interpreting decades of literature in developmental psychology. This thesis studies the functions of infants' looking behavior, and introduces novel tools to accelerate its study. Chapter 1 is a theoretical review which challenges the notion that learning in infancy is primarily incidental and passive. I outline ways in which infants use their gaze to learn, as well as form and manage social relationships. Chapter 2 demonstrates that, indeed, infants' looking behavior is better understood as an active sampling process. I describe a computational model that posits that infants' gaze is optimized to maximize expected information gain from noisy perceptual input, and show through large-scale behavioral experiments that infant looking is well described by this model. Chapter 3 then confronts the methodological challenges of studying infant gaze empirically: obtaining and processing data from a single infant in an infant looking time experiment takes about 2 hours. I describe a workflow in which we reduce this time to about 5 minutes per infant by a) using asynchronous, instead of in-lab, testing, b) training parents, rather than experimenters, to control the flow of experiments, and c) replacing manual gaze coding with automatic annotation using modern computer vision tools. Finally, synthesizing the preceding chapters, Chapter 4 describes outstanding challenges for the empirical and computational study of infant attention.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159205</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural correlates of trait mindfulness</title>
<link>https://hdl.handle.net/1721.1/159204</link>
<description>Neural correlates of trait mindfulness
Treves, Isaac N.
There is a clear and present need to identify the biological bases of mental health and mental illness. In this thesis, I focus on the brain bases of trait mindfulness, measured using self-report. Mindful individuals pay attention to the present moment and bring an attitude of acceptance and non-judgement to their thoughts and feelings. Despite the well-established importance of trait mindfulness to well-being, there are no established brain measures of trait mindfulness. This may be because of methodological obstacles to brain-behavior association studies. In this thesis, I evaluated the significance of these obstacles in the field and addressed them empirically. In Chapter 2, I conducted a systematic review of 68 brain imaging studies of trait mindfulness. There were some commonalities, but also large gaps in the literature. Sample sizes were small, and studies focused on single regions, networks or EEG responses. There was a lack of research on self-awareness and body awareness, important components of mindfulness. In the following chapters, I conducted three fMRI studies using large existing datasets and rigorous methodology to elucidate brain-mindfulness associations. In Chapter 3, I conducted connectome predictive modelling with the largest sample of any lab-based neuroimaging study of mindfulness (n = 367 adults). I found whole-brain network models of attention and non-judgement components of mindfulness that generalized to one of two held-out datasets. The models incorporated default-mode, somatomotor, and visual networks. Overall mindfulness scores were not predictable, suggesting challenges to a single brain marker of mindfulness. In Chapter 4, I analyzed a dataset of resting-state fMRI in adolescents, conducting dynamic connectivity analyses to find time-varying brain states. I selected brain states that showed good test-retest reliability, and these brain states correlated with mindfulness. 
Interestingly, one brain state exhibited global hyperconnectivity, perhaps a marker of arousal or awareness. Finally, in Chapter 5, I questioned whether using a task involving breath-counting would bolster correlations with trait mindfulness. I found differential brain responses to the task vs resting-state, but the responses did not correlate with trait mindfulness. Together, these studies contribute to an emerging picture of the mindful brain as being reflected in both unimodal (e.g. VIS) and heteromodal (e.g. DMN) brain networks, as well as static and dynamic functional organization. However, results underscore the difficulty of finding generalizable correlates across samples, conditions, and mindfulness scales.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159204</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Postnatal specialization of astrocyte regional heterogeneity in the mammalian brain and improved tools for studying glia</title>
<link>https://hdl.handle.net/1721.1/159203</link>
<description>Postnatal specialization of astrocyte regional heterogeneity in the mammalian brain and improved tools for studying glia
Schroeder, Margaret E.
Astrocytes are an abundant class of glial cells with critical roles in neural circuit assembly and function. Though many studies have uncovered significant molecular distinctions between astrocytes from different brain regions, the developmental trajectory of this regional heterogeneity requires further systematic study. Chapter 1 of this thesis provides a detailed literature review on the development of astrocyte regional heterogeneity. To address existing knowledge gaps, we used single-nucleus RNA sequencing to characterize the molecular diversity of brain cells across six developmental stages and four brain regions in the mouse and marmoset brain (Chapter 2). Using this transcriptomic atlas, we show that astrocyte regional specialization is shaped by postnatal development in both species, with significant species divergence in astrocyte gene expression signatures (Chapter 3). In Chapter 4, we report multiplexed expansion revealing (multiExR), a technique that can be used to visualize 20 or more proteins at nanoscale resolution in the same tissue sample. Finally, in Chapter 5, we describe the generation and characterization of Gpr17-Cre, a novel Cre recombinase driver line that is sensitive and specific for the oligodendrocyte lineage and a subset of astrocytes in the central nervous system.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159203</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated and Provable Privatization for Black-Box Processing</title>
<link>https://hdl.handle.net/1721.1/159202</link>
<description>Automated and Provable Privatization for Black-Box Processing
Xiao, Hanshen
This thesis initiates a study on universal leakage quantification and automated privacy-preserving solutions. To minimize assumptions on leakage generation and symbiotically accommodate cutting-edge advances in both algorithms and their implementations, a framework is established that models leakage as the output of a black-box processing function and produces rigorous privacy analysis based entirely on end-to-end simulation. At a high level, we demonstrate the following results: Given access to the underlying black-box secret generation, through mechanized evaluations of the black-box processing function, the hardness of adversarial inference can be provably quantified and controlled through properly selected perturbations. The detailed contributions can be summarized from three perspectives: a). Privacy Definition: We propose a new and semantic notion, called Probably Approximately Correct (PAC) Privacy. This concept describes privacy intuitively as an impossible inference task for a computationally-unbounded adversary and supports expression of a universal privacy concern that is accessible to a general audience. b). Black-Box Leakage Quantification: We introduce randomization optimization and noise smoothing tricks and develop a set of information-theoretical tools based on f-divergence to characterize privacy risk through a statistical mean estimation. Provided sufficient sampling, one can approach this objective risk bound arbitrarily closely, which thus leads to a high-confidence proof. The established theory also connects algorithmic stability and generalization error, demonstrating win-win situations in machine learning that simultaneously improve PAC Privacy and learning performance. c). 
Automated Privacy-Preserving Solutions: Theoretically, we characterize the tradeoff between required privacy guarantees (privacy budget), approximation error of the optimal perturbation strategy (utility loss), and simulation budget (computation power) to automatically construct a perturbation-based privacy solution from black-box evaluations. Operationally, we establish a series of tools to efficiently optimize the noise distribution in high-dimensional or constrained support spaces, and study their online versions with adversarially-adaptive composition. Concrete applications are presented, ranging from formal privacy proof for heuristic obfuscations, to privacy-preserving statistical learning, to response privacy in deep learning with vision models and large language models (LLM), such as ResNet and GPT-2, and hardware security, such as side-channel cache-timing leakage control.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159202</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimum route location in an underdeveloped area</title>
<link>https://hdl.handle.net/1721.1/159201</link>
<description>Optimum route location in an underdeveloped area
Burke, James Eugene,
            1920-
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1963; Includes bibliographical references (leaf 24).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159201</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An undergraduate dormitory for Massachusetts Institute of Technology</title>
<link>https://hdl.handle.net/1721.1/159200</link>
<description>An undergraduate dormitory for Massachusetts Institute of Technology
Warbuton, Ralph.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1958; Includes bibliographical references (leaves 34-35).
</description>
<pubDate>Wed, 01 Jan 1958 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159200</guid>
<dc:date>1958-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ripon 1970, an academic redevelopment program</title>
<link>https://hdl.handle.net/1721.1/159199</link>
<description>Ripon 1970, an academic redevelopment program
Linde, Richard P.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1958; Includes bibliographical references (leaf 22).
</description>
<pubDate>Wed, 01 Jan 1958 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159199</guid>
<dc:date>1958-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mechanism of hydrolysis of triphenylsilyl fluorides in water-acetone solutions and the mechanism of decarboxylation of β-ketoacids in water, benzene, and hexane</title>
<link>https://hdl.handle.net/1721.1/159198</link>
<description>The mechanism of hydrolysis of triphenylsilyl fluorides in water-acetone solutions and the mechanism of decarboxylation of β-ketoacids in water, benzene, and hexane
Esteve Campderá, Ramón María.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1951; Vita.
</description>
<pubDate>Mon, 01 Jan 1951 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159198</guid>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some methods of securing more effective selling through manufacturers' agents</title>
<link>https://hdl.handle.net/1721.1/159197</link>
<description>Some methods of securing more effective selling through manufacturers' agents
Jones, William R.; Lannamann, Robert J.
Thesis: B.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1949; Bibliography: leaves 127-128.
</description>
<pubDate>Sat, 01 Jan 1949 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159197</guid>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capillary phenomena in cohesionless soils</title>
<link>https://hdl.handle.net/1721.1/159196</link>
<description>Capillary phenomena in cohesionless soils
Lambe, T. William.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1948; Includes bibliographical references (leaves 186-188).
</description>
<pubDate>Thu, 01 Jan 1948 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159196</guid>
<dc:date>1948-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanism of hydrolysis of triphenylsilyl fluoride</title>
<link>https://hdl.handle.net/1721.1/159195</link>
<description>Mechanism of hydrolysis of triphenylsilyl fluoride
Esteve Campderá, Ramón María.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1948; Includes bibliographical references (leaves 32-33).
</description>
<pubDate>Thu, 01 Jan 1948 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159195</guid>
<dc:date>1948-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extractive and azeotropic distillation</title>
<link>https://hdl.handle.net/1721.1/159194</link>
<description>Extractive and azeotropic distillation
Hughes, Richard R.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1947; Bibliography: leaves 94-95.
</description>
<pubDate>Wed, 01 Jan 1947 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159194</guid>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modelling, off-design performance and control analysis of OTEC power plants</title>
<link>https://hdl.handle.net/1721.1/159193</link>
<description>Modelling, off-design performance and control analysis of OTEC power plants
Calvo Sotelo, José,
            1893-1936.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Ocean Engineering, 1981; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159193</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design criteria for a medium-powered, dynamically converted radioisotopic power generator for terrestrial use.</title>
<link>https://hdl.handle.net/1721.1/159192</link>
<description>Design criteria for a medium-powered, dynamically converted radioisotopic power generator for terrestrial use.
Esser, Peter D.
Thesis: B.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159192</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic properties of samarium-cobalt and cobalt-platinum.</title>
<link>https://hdl.handle.net/1721.1/159191</link>
<description>Magnetic properties of samarium-cobalt and cobalt-platinum.
Ralph, Mark Jonathan.
Thesis: B.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1976; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159191</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Railroad reliability and freight car utilization : an assigned fleet model.</title>
<link>https://hdl.handle.net/1721.1/159190</link>
<description>Railroad reliability and freight car utilization : an assigned fleet model.
Assarabowski, Richard John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaves 132-133.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159190</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of type H transformers</title>
<link>https://hdl.handle.net/1721.1/159189</link>
<description>An investigation of type H transformers
Potter, A. A.
            (Andrey Abraham),
            1882-1979.; Obear, George Barrows.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1903
</description>
<pubDate>Thu, 01 Jan 1903 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159189</guid>
<dc:date>1903-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for the main building of a day school for girls</title>
<link>https://hdl.handle.net/1721.1/159188</link>
<description>Design for the main building of a day school for girls
Pattee, Elizabeth G.
Thesis: B.S., Massachusetts Institute of Technology, Department of Architecture, 1916
</description>
<pubDate>Sat, 01 Jan 1916 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159188</guid>
<dc:date>1916-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new approach to the marketing of railroad services</title>
<link>https://hdl.handle.net/1721.1/159187</link>
<description>A new approach to the marketing of railroad services
Sharp, W. Bennett.
Thesis: B.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1936; Includes bibliographical references (leaves 120-121).
</description>
<pubDate>Wed, 01 Jan 1936 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159187</guid>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a flexible system for roadside restaurants</title>
<link>https://hdl.handle.net/1721.1/159186</link>
<description>Design of a flexible system for roadside restaurants
Weese, Harry M.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1938; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159186</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A beauty establishment</title>
<link>https://hdl.handle.net/1721.1/159185</link>
<description>A beauty establishment
Thompson, Polly Povey.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1938; Includes bibliographical references (leaf 18).
</description>
<pubDate>Sat, 01 Jan 1938 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159185</guid>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Brewing Resilience: A Case Study in Adapting Small Business Strategy with Systems Thinking</title>
<link>https://hdl.handle.net/1721.1/159152</link>
<description>Brewing Resilience: A Case Study in Adapting Small Business Strategy with Systems Thinking
Jones, Andrew C.
This thesis explores how systems thinking—a methodology often reserved for large organizations—can be effectively applied to small businesses facing complex challenges. Using Lamplighter Brewing Co., an independent microbrewery in Cambridge, Massachusetts, as a case study, the research examines how the brewery adapted to the disruptions of the COVID-19 pandemic and the evolving economic landscape that followed. It documents the iterative application of systems thinking principles to identify root causes, leverage points, and actionable solutions to address issues such as declining revenue, rising costs, and misaligned organizational structures.&#13;
Lamplighter's interventions ranged from restructuring its management and marketing teams to pivoting its sales and production strategies. By leveraging tools such as causal loop diagrams and stock-and-flow models, the brewery uncovered systemic dynamics driving its performance. The research highlights the importance of iterative learning, targeted interventions, and holistic analysis in fostering resilience and sustainability in resource-constrained environments.&#13;
While focused on the craft brewing industry, the findings offer transferable insights for small businesses in similarly dynamic sectors, demonstrating that systems thinking can empower smaller organizations to navigate complexity, adapt strategically, and thrive amidst uncertainty.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159152</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems Analysis of Plant Responses to Drought</title>
<link>https://hdl.handle.net/1721.1/159151</link>
<description>Systems Analysis of Plant Responses to Drought
Yun, Jie
Understanding how plants respond to environmental stress is critical for ensuring stable crop performance and predicting how natural populations may adapt to a changing climate. While plant biology has traditionally focused on plant physiology and molecular biology of model plants to elucidate plant responses, there is immense diversity in how plants respond to environmental conditions, arising from complex genotype-by-environment interactions (GxE).  &#13;
This dissertation investigates these themes, aiming to advance our understanding of the mechanisms driving plant responses to environmental stress and providing insights for improving agricultural resilience and sustainability, as well as contributing to evolutionary biology. This thesis focuses on three projects: &#13;
(1) While GxE is widely observed in traits and gene expression patterns, the mechanisms driving these interactions remain unclear. This thesis will present a framework using causal inference to study GxE interactions in gene regulatory networks to uncover the molecular mechanisms driving diverse environmental responses. We study two genotypes of the model grass species Brachypodium distachyon, leveraging natural variation and RNA-sequencing to study their responses to drought stress. &#13;
(2) Natural perturbations can be used to understand complex traits. In wild species, limited resources drive allocation strategies that balance trade-offs between survival risks and fitness benefits, which is central to their ecology. This thesis particularly focuses on understanding a whole plant trait – carbon allocation – using divergent responses of annual and perennial species of Brachypodium to drought stress. &#13;
(3) Does domestication trade off stress tolerance for rapid growth? Plant domestication is thought to create trade-offs between high yield and stress tolerance, raising concerns about yield stability in future climates. This thesis will present a high-throughput phenotyping approach to study this question, focusing on leaf growth environmental response and its cellular regulatory mechanisms.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159151</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Learning-Based Classification of Phonotraumatic Vocal Hyperfunction Severity from Stroboscopic Images</title>
<link>https://hdl.handle.net/1721.1/159150</link>
<description>Deep Learning-Based Classification of Phonotraumatic Vocal Hyperfunction Severity from Stroboscopic Images
Balaji, Purvaja
Phonotraumatic vocal hyperfunction (PVH) is a vocal disorder characterized by damaged vocal folds from excessive or abusive voice use. Clinical assessment of PVH relies on time-consuming videostroboscopy examination, which poses challenges for large-scale clinical studies. We address the need for more efficient clinical assessment tools by proposing deep learning approaches for automatically detecting PVH severity from stroboscopic images. One of the main challenges in building deep learning models for this task is a lack of labeled stroboscopy data. Motivated by this challenge, we explore two approaches: direct classification and segmentation-then-classification. In the segmentation-then-classification approach, we first train a model to segment the glottis, a clinically relevant part of the vocal fold anatomy. Then, we use the predicted segmentation along with the stroboscopic image as inputs into a classification model. This approach helps to guide the model towards key anatomical features. We achieve up to 0.53 accuracy in four-class PVH severity prediction with the direct classification approach. Incorporating glottal segmentations improves the accuracy to 0.64, underscoring the value of providing anatomically-informed segmentations when assessing PVH severity. By creating an automated PVH severity tool, our work has the potential to help clinicians more efficiently monitor disease progression and to facilitate large-scale screening, thereby contributing to improved patient care.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159150</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intuitive Audio Interaction and Control in Multi-Source Environments</title>
<link>https://hdl.handle.net/1721.1/159149</link>
<description>Intuitive Audio Interaction and Control in Multi-Source Environments
Oduniyi, Erick O.
In an increasingly noisy world, managing auditory focus is a persistent challenge. This thesis explores how embodied interactions—primarily head tracking, alongside experiments with gaze tracking, speech commands, and audio-visual segmentation—can enhance user control over complex auditory environments. By linking head orientation to volume adjustments, we investigated whether natural, instinctive movements could serve as intuitive, hands-free mechanisms for isolating and amplifying relevant sounds. User studies revealed that head tracking is effective in structured audio contexts, such as music, where distinct sources are easily separable. However, its utility diminishes in dense, overlapping conversations, highlighting the need for finer control mechanisms. While gaze and segmentation offer promising refinements, cognitive load and system responsiveness remain key challenges. These findings underscore that embodied audio interaction must be adaptive, content-aware, and seamlessly integrated with user intent. This research contributes to human-computer interaction by demonstrating both the potential and limitations of movement-based audio control. Future work should refine multimodal fusion, improve segmentation accuracy, and enhance accessibility to create systems that dynamically respond to users’ natural behaviors—reducing cognitive strain and enabling more fluid, user-centric auditory experiences.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159149</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating the Effects of Pharmaceutical Interventions, Social Policies, and Exogeneous Shocks on People's Health and Behavior</title>
<link>https://hdl.handle.net/1721.1/159148</link>
<description>Evaluating the Effects of Pharmaceutical Interventions, Social Policies, and Exogeneous Shocks on People's Health and Behavior
Charpignon, Marie-Laure
Aging individuals tend to suffer from chronic conditions, some of which manifest in midlife (e.g., type 2 diabetes and hypertension) and some later (e.g., neurodegenerative disorders). As the global population increases and as people are living longer, finding strategies to prevent or delay these diseases has become a key priority. Concurrent advances in public health and biomedicine offer an array of pharmaceutical (e.g., oral drugs, vaccines) and non-pharmaceutical solutions (e.g., preventative and behavioral health measures). Meanwhile, exogenous shocks such as pandemics also affect the health and well-being of aging and other vulnerable individuals or populations (e.g., immunocompromised individuals, multigenerational households). In such circumstances, pharmaceutical interventions may not be readily available, forcing governments to implement socio-behavioral policies such as lockdowns and mask-wearing mandates and companies to adopt remote and hybrid work practices. Natural experiments, such as the social isolation induced by the COVID-19 pandemic or incentive-based vaccine distribution programs aimed to bolster vaccine uptake during this time, provide an opportunity to assess retrospectively the effect of federal, state, or local government policies. Another example consists of leveraging new drug approvals and changes in clinical guidelines to learn from electronic health records (EHR) which existing treatments could be repurposed to delay neurodegeneration and/or increase longevity, and if so, for whom they would work best. However, unlike randomized controlled trials, natural experiments suffer from multiple sources of confounding. The use of appropriate causal inference methods can help mitigate confounding bias, including via weighting and regression discontinuity designs. 
This thesis illustrates the use of existing causal inference approaches in population health and proposes new methods to evaluate the effects of pharmaceutical interventions (Chapters 1 and 2), exogenous shocks (Chapters 3, 4, and 5), and socio-behavioral policies (Chapters 3 and 5) on the health and well-being of aging and other vulnerable individuals or populations. Specifically, Chapters 1 and 2 leverage the target trial emulation framework to study the comparative effectiveness of antidiabetic and antihypertensive drugs towards preventing dementia or delaying its onset, using EHR data from Mass General Brigham healthcare system. Our target trial emulations suggest the diabetes drug metformin and the antihypertensive drug class of angiotensin receptor blockers as potential repurposing candidates for dementia, especially if initiated before age 70. Chapter 3 uses regression discontinuity designs to quantify the benefits of a local vaccine companion program in Massachusetts during the COVID-19 pandemic. We estimate that this initiative may have bolstered vaccine uptake among older adults aged 75+ by up to 22 percentage points. Chapter 4 implements counterfactual time series modeling to estimate pandemic-period excess mortality associated with overdoses in the US, by substance and geography. We find ∼25,650 excess deaths nationally (March 2020-August 2021), disproportionately affecting Southern and Western regions of the country and attributable mainly to synthetic opioids, methamphetamines, and alcohol as well as polysubstance use. Chapter 5 characterizes changes in team coordination among knowledge workers at a large global tech company to better understand the rise of hybrid work practices and their potential implications for well-being. Using two-way fixed effect regression models, we find evidence of voluntary alignment of work schedules with managers and greater co-attendance among employees who were recently hired or work in shared office spaces. 
Collectively, these five studies demonstrate how we can effectively learn from data about past events, medical records, and office attendance logs to provide insights that inform the design of future public health strategies.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159148</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data futures: Transforming digital traces into public goods in the age of commercial surveillance</title>
<link>https://hdl.handle.net/1721.1/159147</link>
<description>Data futures: Transforming digital traces into public goods in the age of commercial surveillance
Berke, Alex
For decades, government agencies have collected surveys to produce datasets and statistics that serve as public goods, enabling research and empowering communities from whom data are collected. These data sources are costly to collect and are in decline as survey response rates drop. In contrast, increasing quantities of data are collected from the public by companies -- data we unavoidably generate by making purchases, using the Internet, or simply operating a mobile phone.  This data collection might be considered a form of surveying the public, but one where privatized datasets empower corporations rather than communities, and the ensuing potential harms cannot be empirically assessed without access to these data. &#13;
&#13;
This thesis considers a future where corporations can track populations and estimate statistics more accurately than the government agencies traditionally tasked with such efforts. It illustrates how this future may be near and explores resulting questions through case studies. Namely, are there more privacy-preserving or equitable or cooperative ways to manage these data, to benefit the public from whom they are sourced?&#13;
&#13;
The first set of case studies uses location data from mobile phones, first developing a more privacy-preserving approach by leveraging recurrent neural networks to generate realistic synthetic data, and second developing aggregated mobility metrics to improve country-level population estimates and COVID-19 epidemic models. The next set of case studies uses web browser data to evaluate risks of cross-site user tracking that are present despite privacy-enhancing browser developments. The first web study repurposes data collected by a data broker; the second uses a dataset we crowdsourced and openly published to benefit this research and future research. For the next set of case studies, we crowdsourced and published a first-of-its-kind open dataset of purchase histories from thousands of Amazon.com users, along with their sociodemographics. We use this dataset to demonstrate how corporate data can provide insights into societal changes and also evaluate privacy risks due to inferring sensitive consumer information from purchases.&#13;
&#13;
The data used in this thesis (mobile device locations, web browsing data, purchase histories) are examples of digital traces collected continuously from people throughout everyday activities, without explicit consent. This work points towards cooperative data sharing as a paradigm to empower research that benefits the public while prioritizing consent. Could such a paradigm exist with public support and participation? In order to study this and inform future crowdsourcing efforts, we embedded behavior experiments and surveys into our crowdsourcing tools, shedding light on what impacts users' likelihood to share their data, how users believe their data should be used, and how results differ across demographics.&#13;
&#13;
Throughout these studies, this thesis asks a broader question: Can we envision, and build towards, a future with alternative data economies that shift the power dynamics of data collection, along with the control and benefits of these data? To begin to address this question, this thesis proposes speculative, privacy-enhancing, and cooperative commerce networks. Such system changes may incur new costs for consumers. The final case study measures consumers' willingness to pay for privacy in new package delivery networks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159147</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Digital Thread Maturity in Manufacturing: A Cross-Industry Study Using the Model-Based Enterprise Capability Assessment Framework</title>
<link>https://hdl.handle.net/1721.1/159146</link>
<description>Digital Thread Maturity in Manufacturing: A Cross-Industry Study Using the Model-Based Enterprise Capability Assessment Framework
Peters, Michael Scott
Modern-day manufacturing organizations find themselves in volatile and competitive markets with increasing pressure to deliver products faster, at lower cost, and with increased quality. In response to this pressure, many organizations are considering how technological advancements may improve the efficiency of their product development operations. Leading organizations have digitally transformed their businesses by shifting away from manual processes, static documents, and siloed operations toward automation, model-based data, and interconnectivity enabled by a digital thread. Accordingly, organizations pursuing the competitive edge offered through the digitalization of their business operations have often used different assessment tools to benchmark their current capabilities and define their vision for the future of their organizational operations.&#13;
&#13;
This thesis proposes a set of model-based and digital thread capabilities that are central to the long-term success of product development operations, along with a corresponding maturity model that may be used to identify gaps between current- and future-state capability implementation. Using the proposed capability maturity model, known as the Model-based Enterprise Capability Assessment Framework (MECAF), this study evaluated and compared capability maturity across various organizations in the Aerospace and Defense, Automotive, and Heavy Machinery industries. Through interviews with each participating organization, this thesis also explores the expected benefits, common challenges, and anticipated value of implementing model-based capabilities. Additionally, this thesis proposes an approach to bridging the gap from strategy to implementation based on the lessons learned and best practices of the organizations studied.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159146</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Productivity in the Workplace for Product Development Teams</title>
<link>https://hdl.handle.net/1721.1/159145</link>
<description>Productivity in the Workplace for Product Development Teams
Farfan Perdomo, Jorge
Productivity is a measure of the value generated for every hour worked. In a product development team, productivity can be affected by endogenous and exogenous factors, such as biological rhythms, work style, availability, work interruptions, team size, location, and the management strategies taken in a project. These factors will have an effect on the amount of effective work value generated in a workweek.&#13;
&#13;
A mathematical model and a Monte Carlo simulation were used to quantitatively assess the impact of these factors on the estimated cost and duration of a product development project. Based on the model results, we determined that workweek capacity and interruptions in the workplace are central to productivity. In addition, we demonstrated that combining different management strategies could bring the project back on schedule and within budget, reducing the effects of these inefficiencies due to diverse endogenous and exogenous factors.&#13;
&#13;
For these reasons, this case study on a product development project will provide insight to engineering managers and project leaders about the effects of these inefficiencies in the workplace. The findings will help pave the way toward a more accurate project estimation and better modeling of project dynamics to reduce the amount of uncertainty in product development teams.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159145</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Implementation of a Nonblocking Randomized Work Stealing Scheduler</title>
<link>https://hdl.handle.net/1721.1/159144</link>
<description>Design and Implementation of a Nonblocking Randomized Work Stealing Scheduler
Ali, Sabiyyah
This thesis presents FLCN (Free of Locks, Cilk is Now), a nonblocking work-stealing runtime scheduler that supports Cilk multithreaded programming. The existing OpenCilk runtime system uses lock-based synchronization and thus suffers from lock contention, does not provide progress guarantees, and can experience performance degradation with high worker counts and in multiprogrammed scenarios. FLCN leverages the existing runtime system’s provably efficient scheduling algorithm and introduces several new data structures and concurrency protocols to form a correct and performant lock-free system. In addition to enabling fork-join task parallelism, FLCN supports other Cilk features such as reducer hyperobjects. Analyzing the performance of FLCN on various canonical benchmark programs, I find that for programs with low amounts of work, FLCN performs worse than the existing runtime. However, for most programs, I find that FLCN is either competitive with or marginally outperforms the existing runtime. Additionally, FLCN consistently exhibits higher scalability than the existing runtime, with especially large gains when using hyperthreads and in multiprogrammed environments. I also outline future work that could make FLCN a more comprehensive and performant system, including ideas for improving FLCN’s work efficiency that would in turn improve its performance on programs with low amounts of work.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159144</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Role of Foundation Models for Training Generalist Robot Learning Policies</title>
<link>https://hdl.handle.net/1721.1/159143</link>
<description>Exploring the Role of Foundation Models for Training Generalist Robot Learning Policies
Feng, Eugenia Y.
Numerous methodologies for solving goal-conditioned short-horizon tasks require hundreds of expert demonstrations, but these demonstrations are effort-intensive to collect, reducing the scalability of these approaches. Even approaches that do work may have difficulty generalizing to slightly different settings. In this work, we explore two approaches to training generalist robot learning policies using large-scale foundation models. &#13;
&#13;
The first approach aims to use a video foundation model to generate task-conditioned synthetic demonstrations at scale from a single expert demonstration. The objective is to leverage these synthetic demonstrations as a proxy for expert demonstrations to train models that learn rewards from expert videos for solving complex visual RL problems. &#13;
&#13;
The second approach seeks to improve upon the generalization ability of behavior cloning policies. Moving away from the use of videos for training, we explore using privileged representations such as keypoints or object poses learned using open-set foundation models. By tracking pose or keypoint correspondences, the aim is to minimize the required number of demonstrations to achieve task completion and improve generalization within classes of objects.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159143</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prompt Injection Generation Using Small Language&#13;
Models with Reinforcement Learning with Artificial&#13;
Intelligence Feedback</title>
<link>https://hdl.handle.net/1721.1/159142</link>
<description>Prompt Injection Generation Using Small Language&#13;
Models with Reinforcement Learning with Artificial&#13;
Intelligence Feedback
Gupta, Aneesh
Large language models (LLMs) have become an integral part of many fields, from customer support automation to research assistants. However, despite their growing adoption, they face significant challenges, particularly when it comes to safety in sensitive contexts. Existing methods like Reinforcement Learning with Human Feedback (RLHF) and keyword filtering have contributed to improving the robustness of these models, but these approaches are very resource-intensive and the models can still be vulnerable to malicious attacks like prompt injections and jailbreaking. One notable limitation in testing defenses against such attacks is the scarcity of appropriate datasets. This thesis investigates the use of small language models (SLMs) to generate goal hijacking messages, a subset of prompt injection messages. Techniques such as LoRA fine-tuning and full fine-tuning of even smaller models are employed for this short-form text generation task. We also introduce a fine-tuned SLM enhanced with Reinforcement Learning with Artificial Intelligence Feedback (RLAIF), which removes reliance on slow human feedback by using faster AI-generated feedback instead. By optimizing the reference model and reward functions, we improve alignment with ground truth prompt injection messages while addressing issues such as mode collapse and overfitting. These findings show promise, and further research is necessary to determine how well the approach can generalize to other domains and perform in real-world scenarios. Future work is likely to focus on multilingual datasets and distributed computation to further extend the applicability and efficiency of the method.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159142</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Diffusion Models to Enable Efficient Sampling for Task and Motion Planning on a Panda Robot</title>
<link>https://hdl.handle.net/1721.1/159141</link>
<description>Learning Diffusion Models to Enable Efficient Sampling for Task and Motion Planning on a Panda Robot
Johnson, Quincy
A search-then-sample approach to bilevel planning in the context of task and motion planning is one method of effectively solving multi-step robotics problems. In this planning framework, high-level plans of abstract actions are refined into low-level continuous transitions by sampling controller parameters associated with each action. Efficiently sampling these parameters remains a significant challenge, as exhaustive searches often become computational bottlenecks, especially for tasks requiring complex or multimodal parameter distributions. Moreover, relying on samplers hand-designed by humans is both impractical and limiting. To address these challenges, we propose using diffusion models to learn efficient sampling distributions from demonstrations. By avoiding the limitations of hand-specified and naïve sampling methods, our approach enhances planning efficiency and achieves superior performance across diverse tasks that require learning multimodal parameter distributions to solve successfully.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159141</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling, Design, and Assembly of Spring Tires</title>
<link>https://hdl.handle.net/1721.1/159140</link>
<description>Modeling, Design, and Assembly of Spring Tires
Lu, Michael
With a renewed interest in the Moon and the need for autonomous lunar rovers that drive longer distances and operate over extended durations, designing efficient and robust mobility systems is paramount. Created by NASA Glenn Research Center, the spring tire is a compliant airless tire engineered for planetary rover missions in lunar and Martian environments. It consists of hundreds of coiled springs woven together to create a toroidal-shaped mesh wheel that can deform to uneven terrain, providing additional durability and traction. This work aims to apply this technology to two robotic testbeds: ERNEST, an autonomous lunar traversal rover built at NASA Jet Propulsion Laboratory, and IPEx, a lunar regolith mining robot built at Kennedy Space Center. This thesis discusses the modeling of these spring tires with numerical methods along with the design of two spring tire prototypes for use on the aforementioned rover platforms. A streamlined assembly process for these compliant wheels is also outlined as well as the results of compression testing, rough terrain driving, and drawbar pull testing to assess their performance.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159140</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>1863 Virginia: A short story</title>
<link>https://hdl.handle.net/1721.1/159139</link>
<description>1863 Virginia: A short story
Green II, Kelvin
The question that motivates “1863 Virginia: A short story” is rooted in interracial solidarity and whether it exists outside of a common enemy. During this time in U.S. history, free and enslaved black people; slave-owning and poor white people; and assimilated and resistant native people co-existed. The story follows Indi, a Pamunkey woman, and Abram – a self-liberated and formerly enslaved African man from White House plantation. Due to her tribe's Black Laws, Indi is exiled for giving birth to a child of a Black man. Abram loses the love of his life to his murderous master Mr. Lee and runs away from White House plantation, where he stumbles across Indi, Baby Joseph, and another person, named Sophia, whom Indi took in during her time in exile. Slave catchers come to Indi’s home looking for Abram, and she must decide whether she will give him up or defend him. The text seeks to understand the interior character of people surviving impossible realities while also staying true to the connection of human beings and nature. There is a character, Mae, a horse, who expresses herself, and the river Pamunkey, who speaks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159139</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Study of High Harmonic Fast Waves Interactions in the Scrape off Layer of NSTX-U</title>
<link>https://hdl.handle.net/1721.1/159138</link>
<description>Study of High Harmonic Fast Waves Interactions in the Scrape off Layer of NSTX-U
De Levante Rodriguez, Ricardo Antonio
High-harmonic fast wave (HHFW) heating experiments in the National Spherical Torus Experiment (NSTX) at Princeton Plasma Physics Laboratory (PPPL) have shown that up to 60% of the injected power can be lost in the Scrape-Off Layer (SOL) when the fast wave is able to propagate in front of the antenna [Hosea, Phys. Plasmas 15, 056104 (2008)]. This work discusses progress in modeling HHFW propagation and losses in the divertor region using more realistic SOL plasmas in the NSTX-U SOL 2D geometry. Previous RF studies assume density is a function only of magnetic flux, decaying exponentially, which may be insufficient to accurately determine the wavefield, especially in the divertor and high-field side plasma regions. In this work, the temperature profile is first evaluated by solving the non-linear heat conduction equation using a finite element approach in the Petra-M workbench assuming axisymmetry. A 2D density profile is then obtained from a prescribed outer midplane radial profile assuming pressure is uniform on a flux surface. This approach results in density and temperature profiles in which the strong asymmetric nature of diffusion is successfully captured. In particular, it is shown that for a parallel to perpendicular heat conduction anisotropy ratio of up to 10⁸, the expected exponentially decaying temperature profile is obtained using a non-linear iterative solver with proper mesh refinement conditions. Furthermore, this work focuses on investigating the effect of the SOL plasma density profile on the fast-wave propagation at different antenna phasing. The simulation results show that the gradient of the midplane density profile affects the wavefield pattern. As the density profile broadens, the wavefield intensity is reduced in the SOL and increased in the core. Finally, HHFW power in the plasma was studied by adding electron-ion collision power dissipation as a proxy for HHFW power deposition. 
The simulation results show that increasing the density gap width between the antenna and the core results in more power deposited in the SOL relative to the core.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159138</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable and Sustainable Microwave Power Beaming to&#13;
Mobile Lunar Surface Assets</title>
<link>https://hdl.handle.net/1721.1/159137</link>
<description>Scalable and Sustainable Microwave Power Beaming to&#13;
Mobile Lunar Surface Assets
Ng, Chu Pang Alex
Lunar missions are hindered by the challenges of maintaining continuous operation, especially during the 14-day lunar night, when solar power sources may be unavailable, causing significant mission delays and limiting efficiency. Frequent returns to charging stations supplied by fixed lunar surface power plants further disrupt workflows and restrict the operational range of lunar vehicles. To address these issues and enhance lunar mission performance, a continuous, secure, and shareable power source is essential. While nuclear power and larger battery systems are viable options for continuous lunar energy supply, they pose challenges such as safety risks, complex deployment, and limited scalability. This thesis focuses on exploring microwave-beamed power systems as a flexible and scalable solution for sustained lunar operations. Ideally, the power source would enable 24/7 operations without requiring vehicles to return to base stations, allowing for unrestricted navigation across the lunar surface, including in permanently shadowed regions (PSR). In addition, it would support the construction of critical infrastructure, accelerating the development of the lunar economy. This thesis aims to support sustained lunar exploration and infrastructure development by exploring the design space for microwave-beamed power systems under three different demand use cases of increasing scale, loosely corresponding to the three phases of the Artemis program: Local (Shackleton Crater), Regional (navigation between equatorial regions and South Pole), and Global (entire lunar surface). A case study focused on the YUTU-2 lunar rover investigates alternative architectures for each use case, comparing power beaming from tall towers vs. satellites. Evaluation reveals that the most effective solution for the Local use case is a tower-based approach featuring a single 100m tower, &gt;10,000 solar modules, and a 1 GHz operating frequency, at a cost of $3.4M/W. 
For the Regional use case, a satellite-based solution is preferred, utilizing 6-7 satellites per plane, 210,000 solar modules, and a frequency of 1.0 GHz, at a cost of $1.7M/W - $1.8M/W. The Global use case also favors a satellite-based approach, employing 6 satellites per plane across 5 polar planes, with varying numbers of solar modules and utilizing a frequency of 1 GHz, at a cost of $0.8M/W. The trade studies showed that larger receiver antenna areas and lower frequencies improve performance and cost-effectiveness. Furthermore, larger microwave-beamed power systems leverage economies of scale, lowering the cost per watt by an average of $1M/W when scaling from the Regional to the Global power system, with potential for further reductions through future expansions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159137</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Detecting Expertise Influence on Teamwork in Sustainable Urban Design Workshops through a System Model</title>
<link>https://hdl.handle.net/1721.1/159136</link>
<description>Detecting Expertise Influence on Teamwork in Sustainable Urban Design Workshops through a System Model
Li, Chen
The design of sustainable urban communities near transportation hubs, such as train stations, may play a vital role in enhancing neighborhoods by fostering new jobs, encouraging mixed-use developments, and promoting a cleaner environment. The engagement of experts and non-experts is often promoted as part of the urban planning process, yet workshops, while motivating, do not necessarily affect the systems design and long-term sustainability of the neighborhood in a substantive way.&#13;
 &#13;
Prior studies present methods for detecting teamwork during the design of complex systems, including model-based co-creation and urban design workshops. While interactive model-based workshops promote increased engagement of non-experts, the traditional role of experts in framing the design options and the workshop dialogue remains. This thesis research seeks to examine how expertise shapes decision-making in urban sustainability contexts using enhanced system models. &#13;
 &#13;
The research approach focuses on sustainable urban design workshops for compact city development, following three key steps.  First, a neighborhood system model incorporating a commute flow simulator is developed to support collaborative exploration and design decision-making processes. Second, during a pilot experimental workshop, participants are divided into control and treatment groups, challenged to design a vibrant community with economic, social, and environmental benefits. The treatment group receives an expert-proposed, advocated solution to assess its impact on exploration and decision-making. Finally, results are analyzed using Large Language Models (LLMs) and statistical methods to assess how expert-driven solutions impact teamwork collaboration, decision-making speed, and final design alignment with the advocated solution.&#13;
&#13;
While the pilot workshop primarily serves to validate the approach and test the methodology, conclusive results cannot be drawn due to its exploratory nature. Nevertheless, this research successfully developed a robust urban design system model, enabling stakeholders to generate innovative solutions that foster a thriving community. Additionally, it established a methodology to advance the understanding of expertise in teamwork dynamics, laying a strong foundation for future studies in teamwork analysis and urban design challenges.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159136</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Discovery via Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/159135</link>
<description>Generative Discovery via Reinforcement Learning
Hong, Zhang-Wei
Discovering new knowledge is crucial for technological advancement and mirrors how humans and animals learn new skills—often through trial and error. Ancient humans, for example, discovered fire by experimenting with different methods, and children learned to walk and use tools through repeated attempts and failures. In chemistry, scientists find new catalysts by testing various compositions. But how exactly do humans use trial-and-error to improve existing solutions (like learning more efficient ways to walk or synthesizing novel compounds)? Can we design computational models that mimic or exceed human discovery? Such computational models could greatly accelerate progress in science and engineering since they can automate or assist human scientists’ and engineers’ work and discover new knowledge more efficiently (e.g., new compounds, streamlining the robot controller design, etc.). Reinforcement learning (RL) is well-suited for discovery tasks because it enables machines to learn through trial and error. My work overcomes the following major limitations of today’s RL algorithms and thereby advances their discovery potential: Mitigating the bias of reward shaping. RL relies on reward signals from trial-and-error experience, but these signals can be sparse, meaning they are only provided once a desired solution is found and otherwise zero. Most trials, therefore, offer little to no feedback. A common strategy to improve performance under sparse rewards is to provide additional hints (i.e., reward shaping) to guide RL algorithms. However, if these hints are inaccurate, they can steer the algorithm toward worse solutions than those found without them. I propose a new RL framework that can be combined with any standard RL algorithm, ensuring that training with hints finds better solutions instead of harming performance. Learning with sub-optimal data. RL can learn not only from online interaction with the world but also from datasets of logged experiences. 
For expensive or time-consuming tasks like material discovery or robot learning, offline RL could be preferred because it leverages existing data rather than requiring new interaction with the world. However, such datasets could contain mostly low-reward solutions, which limits the offline RL algorithm’s performance in finding solutions better than what’s in the dataset (as we show later in this thesis). I introduce sample reweighting strategies that reweight the dataset so that current offline RL algorithms trained with the weighted samples are able to discover solutions far better than what’s in the dataset, even if low-reward solutions predominate in the dataset. Safety via Diversity. Standard RL algorithms aim to find a single “best” solution. Yet, in many discovery problems, such as drug development, it is more valuable to generate multiple high-reward solutions with distinct properties (i.e., diversity) than to focus on only one. I study this problem in an emerging discovery task: red-teaming large language models (LLMs). In red-teaming, we desire diverse prompts that trigger undesired outputs from target language models. Current approaches leverage RL to train an LLM to red-team another, but they fall short on the diversity of generated prompts, often converging to a few prompts that consistently trigger undesired outputs. I propose rewarding the agent for maximizing the diversity of generated prompts, which also improves the success of prompts at triggering undesired outputs from the target LLM.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159135</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging the Gap: Generative Machines and Inventive Minds</title>
<link>https://hdl.handle.net/1721.1/159134</link>
<description>Bridging the Gap: Generative Machines and Inventive Minds
Singh, Nikhil
Recording technologies, from the phonograph to digital media, have profoundly reshaped the human experience by enabling the capture and reproduction of our sensory world. These technologies allow us to relive experiences through artifacts of remarkable fidelity like photographs and videos, extending the reach of our perception and memory. Of course, we didn’t stop at the phonograph; we have built a rich ecosystem of tools for creating, sharing, and exploring recorded media that have had transformative effects on cognition and culture. Recently, a new and powerful class of tools has emerged: generative models. Unlike recorded media, which reproduces external experiences, generative models can translate our ideas directly into artifacts. Here, ideas refer to abstract mental constructs that seed media creation, externally expressed in text prompts, sketches, vocalizations, or other intuitive representations. Just as recorded media augmented our ability to perceive and remember, generative media promises to expand our ability to imagine and invent by offering a more immediate path from cognition to high-fidelity creation. Creative work often has us operating at our limits, negotiating boundaries between knowledge and novelty, skill and aspiration, from individual exploration to collective understanding. Generative models, in principle, have the potential to scaffold and accelerate how we transcend these limits by increasing the efficiency with which we discover and pursue new ideas. In this thesis, I suggest that realizing this potential presents a complex set of challenges that span computation and design. I argue that it requires us to develop a rich stack of precision tools for human-AI co-creation, as we have done and continue to do for recorded media. Specifically, I present contributions across two key dimensions of this:&#13;
1. Computational machinery that supports creative work. I present research on topics including visually-driven acoustic simulation, interpretable and controllable sound generation from descriptions, and audiovisual content understanding. Focusing on sound as a case study, I describe systems that effectively represent and manipulate creative knowledge across modalities and levels of abstraction. &#13;
2. Interactive systems and studies that investigate the integration of human and machine effort in content creation. This includes work on conceptual integration in AI-assisted story writing, author-in-the-loop description authoring for accessibility of complex scientific figures, and generative constraints for human ideation. In all, this work seeks insights for designing systems that support human creators through exploration, collaboration, and feedback, rather than aiming to replace or constrain human agency and expertise. &#13;
To conclude this thesis, I present a discussion on bridging AI and HCI to gain insights into human creative work and develop stable, generalizable design knowledge for augmenting it. I argue for the design of flexible, parametric tools that enable systematic study of creative behavior under different augmentation designs. Based on this, I propose a conceptual framework to seed the development of a more robust science of human-AI co-creation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159134</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of Multi-Z Impurity Transport in Tokamaks using Neural Networks</title>
<link>https://hdl.handle.net/1721.1/159133</link>
<description>Investigation of Multi-Z Impurity Transport in Tokamaks using Neural Networks
Johnson, Jamal
Achieving clean, sustainable energy at scale is a pressing global challenge. Fusion of light elements holds significant potential to address this critical need. While only experimental fusion reactors are currently operational, significant progress is being made in the research and design of near-future tokamak fusion power plants. Reactor success will depend on a comprehensive understanding of heat and particle transport, including the role of impurities. This thesis focuses on the development of machine-agnostic neural network surrogates for TGLF, designed to predict impurity transport coefficients alongside heat and electron particle fluxes in DD plasmas. Training data are derived from synthetic fluxes generated for L, H, and I confinement modes in Alcator C-Mod, DIII-D, and ASDEX-Upgrade. To reduce training complexity, shot data are discretized by radius, and networks are developed at six ρ coordinates: 0.2, 0.4, 0.6, 0.7, 0.8, and 0.9. Fifteen plasma parameters are selected as inputs to the neural networks after examining TGLF flux sensitivities across all five output channels. Predicted impurity fluxes for arbitrary charge states and masses, ranging from 4He to 184W, are used to derive diffusive and convective transport coefficients. Three types of synthetic TGLF data are created and applied to network training to produce accurate models. The primary synthetic data type approximates experimental data by sampling within a perturbation range of ±10% around a given shot. Supporting data types enhance network performance by improving trends in single-parameter (1D) scans and addressing areas of highest network uncertainty. Hyperparameter optimization and testing resulted in highly accurate networks. Testing set relative errors averaged over ρ = 0.4–0.7 and 0.9 show approximate deviations of 0.12 ± 0.029 for heat flux and 0.42 ± 0.095 for particle flux channels. 
However, error metrics at ρ = 0.2 and 0.8 require location-specific tuning and potentially more data to match the accuracy achieved at other radii. The networks are used to analyze boron and carbon impurity peaking within machine-specific H-modes. Their predictions are then compared to published results. Qualitative results for boron peaking correlations in ASDEX-Upgrade are clearly reproduced, while carbon peaking trends in DIII-D are weaker. Sparse DIII-D data, which also includes atypical advanced modes, is believed to have contributed to reduced accuracy in these cases. Using H-mode shots spanning low to high local collisionality, impurity diffusion trends with charge state (Z) in ITG- and TEM-dominated plasmas were examined, showing good agreement with published studies. Additionally, analysis of network-derived convective transport shows that Z-sensitivity increases with collisionality. Network scans of the ion and electron heat flux responses to temperature gradients also reveal the clear presence of a critical gradient at all radii. These results demonstrate that the neural networks developed in this work can reliably reproduce TGLF results and deliver fast predictions of heat, electron particle, and impurity transport in tokamaks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159133</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Inference Under Privacy Constraints</title>
<link>https://hdl.handle.net/1721.1/159132</link>
<description>Causal Inference Under Privacy Constraints
Yao, Leon
Causal inference is an important tool for learning the effects of interventions in observational or experimental settings. It is widely used in many fields such as epidemiology, economics, and political science to find answers like the average treatment effect of a medical procedure or the individual treatment effect of a personalized ad campaign. In commercial applications, the era of big data allows companies to increase their experiment volume, incentivizing them, in turn, to collect more user data. On one hand, large volumes of data are necessary to train generative models like ChatGPT. At the same time, companies’ increasing use of user data has drawn heavy criticism and consumer backlash, raising legitimate concerns about privacy and consent. As concerns over user data safety and privacy grow, rules and regulations like GDPR change what kinds of data companies and researchers can acquire and how they can analyze the data. The necessity of now performing causal inference under a range of privacy constraints has carved new spaces for research at the intersection of causal inference and privacy. In my thesis, I will be exploring three paradigms for protecting user data — data minimization, differential privacy and synthetic data — and how to perform causal inference techniques under these new privacy regimes.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159132</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Urban Mining &amp; Regenerative E-Waste Ecosystems: Visions towards Sustainable Entrepreneurial Futures for Informal Settlements and Recycling Communities</title>
<link>https://hdl.handle.net/1721.1/159131</link>
<description>Urban Mining &amp; Regenerative E-Waste Ecosystems: Visions towards Sustainable Entrepreneurial Futures for Informal Settlements and Recycling Communities
Pierre, Georine
In the face of the growing challenge of urban waste, especially within rapidly expanding informal settlements projected to house over 45% of the global population by 2050 (United Nations Department of Economic and Social Affairs, 2022), innovative solutions are imperative. The thesis proposes a paradigm shift towards urban mining, emphasizing the significant value embedded in discarded electronics—where a tonne of circuit boards can hold ten times more precious metals than traditional ore (Minnesota Center for Environmental Advocacy, 2022). The global distribution of off-shored e-waste has led to the emergence of informal settlements that depend on e-waste recovery to support livelihoods and income generation. These communities have become prime examples for urban mining, embracing circular economic strategies to find adaptive ways to repurpose e-waste. Accra, Ghana’s Old Fadama, home to one of the largest e-waste sites in the world, has become a vital economic hub for informal e-waste processing.  With a population of over 100,000 dwellers, local and migrant workers have built resilient communities through innovative recycling practices, tech repairs, and DIY digital fabrication methods. However, they face imminent environmental risks, health hazards, and displacement threats.&#13;
&#13;
Focusing on Old Fadama, the thesis will address the narratives of urban mining communities and look toward a systematic sympoiesis between economic, environmental, and social realities. By doing so, the thesis seeks to answer how we can foster nurturing and circular relationships for informal settlements and develop regenerative ecosystems for urban mining in the city environment. Integrating field research, case study, and implementation, the thesis will: conduct key urban analysis for understanding e-waste sites and urban mining communities; identify technology interventions and policy recommendations that can improve local conditions; and utilize data-driven communication to advocate for new opportunities for urban systems tied to e-waste extraction through immersive multimedia as part of a public exhibition.&#13;
&#13;
Using a novel methodology, the thesis adopts the learnings from the economic, physical, and community-based interventions observed in informal e-waste recovery processes. The thesis combines quantitative data from satellite imagery and remote sensing with qualitative insights gathered through crowdsourced GIS mapping, films, interviews, and creative capacity-building workshops. These combined insights aim to enhance urban models, nurturing the innovation potential already present within urban mining communities. The thesis research will contribute to the previous work of MIT City Science Group’s “Power of Without” initiative, a comprehensive roadmap for understanding and collaborating with informal settlements and proposing non-Western decentralized infrastructure solutions. The thesis aims to provide practical insights for implementing innovations in urban mining communities by developing sustainable e-waste recovery strategies and supporting micro-industries in cities, which could serve as a model for similar contexts globally.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159131</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferencing Techniques for Enhanced Monitoring of Thermal-Fluid Systems</title>
<link>https://hdl.handle.net/1721.1/159130</link>
<description>Inferencing Techniques for Enhanced Monitoring of Thermal-Fluid Systems
Kim, Haeseong
Sensor data augmentation for accurate system monitoring is relevant to many engineering applications, as there is often a gap between available instrumentation and measurement needs. Installing sensors can be limited due to factors such as harsh environmental conditions, the need to avoid operational distortions, and limited space. While continued efforts to develop novel sensor technologies to improve measurement density and quality are important, it is equally crucial to maximize the use of data from existing sensors and measurements. In this work, we employed physics-based methods to solve inverse heat transfer (IHT) problems. Because accurate and well-understood physics models provide strong prior knowledge, physics-based IHT can provide clear solutions using only a small number of temperature measurements. However, existing work in IHT relies on 'perfect' physics models and has been used to solve relatively simple problems such as conduction heat transfer. This thesis extends the IHT problem scope to thermal fluid systems, including the efficient use of sensor data and uncertainty quantification (UQ).&#13;
&#13;
We leveraged high-resolution thermal-fluid experiments to demonstrate the solution of two types of IHT problems. The first problem estimates the operating conditions of the experiment from high-resolution temperature data using a minimal number of sensors. The estimated solution is used to reconstruct the entire temperature distribution on a heating surface, while the rest of the data is used to validate the inverse problem methodology. The estimation result is supported by UQ that accounts for measurement and modeling errors, adding value to the estimation. The second IHT problem consists of identifying sharp-featured 2D heat source distributions with an array of temperature sensors from a subset of experiment data. Solving this IHT problem involved a regularization prior with strong sparsity-promoting capability. The designed iterative optimization process finds the unknown heat source distribution as well as the regularization hyperparameter. In addition, Bayesian inference enhanced the solution quality by providing UQ of the heat source magnitude.&#13;
&#13;
Expanding the scope of IHT problems, we also addressed online state estimation in dynamic systems. This work focuses on a hypothetical inverse conduction problem of a transient heat source in a composite materials system. The physics model of the system is assumed to include uncertainty arising from gap thermal resistance at material interfaces, which complicates the estimation of an internal heat source from external sensor data. To address this challenge, the IHT approach leverages future time-step measurements to correct estimates at the current time step, enabling more efficient use of limited sensor information. The approach is sampling-based, and its statistics provide UQ on the quantity of interest.&#13;
&#13;
While this work addresses inverse problems within specific thermal-fluid systems, the methodology is designed for broad applicability beyond these cases. It lays the groundwork for advanced sparse sensing and inverse problem-solving in thermal systems, offering a more efficient, tractable, and reliable tool for engineers and researchers addressing system monitoring with modeling uncertainty. Looking forward, these methodologies could be valuable for digital twin applications, where live sensor measurements are integrated to provide robust, real-time estimation of the state of physical systems.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159130</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Blockchain Technology for Enhancing Genomic Data Management: A Multidisciplinary Framework for Privacy, Trust, Identity Protection, and Equity</title>
<link>https://hdl.handle.net/1721.1/159129</link>
<description>Leveraging Blockchain Technology for Enhancing Genomic Data Management: A Multidisciplinary Framework for Privacy, Trust, Identity Protection, and Equity
Niu, Yuner A.
The effective adoption of blockchain technology in genomic data management is influenced not only by its technical advantages but also by external factors such as regulatory conditions and the demands of consumers and patients. This thesis explores the critical factors required for blockchain platforms to thrive in managing genomic data, focusing on how these systems can be structured to address the high-priority needs of various stakeholders, including patients, healthcare providers, regulators, and researchers. Through a comprehensive examination of privacy, security, regulatory compliance, and equity concerns, the research develops a multidisciplinary framework that balances technological innovation with real-world stakeholder expectations. By conducting an in-depth stakeholder analysis and analyzing existing blockchain platforms used for genomics, the thesis presents a roadmap for creating blockchain solutions that are both technologically viable and aligned with the complex social, legal, and ethical landscape of genomic data management. This framework aims to maximize value for all stakeholders while mitigating associated risks, positioning blockchain as a viable tool in the future of personalized medicine.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159129</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Representation Learning for Predicting Genetic Perturbation Effects on Single Cells</title>
<link>https://hdl.handle.net/1721.1/159128</link>
<description>Causal Representation Learning for Predicting Genetic Perturbation Effects on Single Cells
Liu, Emily
Advances in sequencing technologies have significantly deepened our understanding of gene regulation in cells. Among these, Perturb-seq has emerged as a powerful technique, enabling high-resolution profiling of transcriptomic responses to genetic perturbations at the single-cell level. Such insights have profound implications for functional genomics and the identification of therapeutic targets. This thesis investigates the efficacy of mechanistic computational models for predicting the effects of previously unseen genetic perturbations on cellular expression profiles. While existing deep learning approaches excel at interpolating within observational data, they often struggle to extrapolate to novel perturbations. To address this limitation, this study introduces a hybrid framework that integrates a linear causal model, grounded in the gene regulatory network, with variational deep learning techniques.&#13;
&#13;
The proposed mechanistic model utilizes a learned gene regulatory network to represent perturbational effects as shift interventions that propagate through the network. This approach operates within a low-dimensional gene space, effectively capturing the essential information needed to reconstruct full transcriptomic profiles. By incorporating this mechanistic causal model into a variational autoencoder (VAE), the framework generates detailed and comprehensive transcriptomic responses while maintaining the capacity to handle noisy, large-scale single-cell data.&#13;
&#13;
Two deep variational architectures are explored within this framework, corresponding to different output distributions. The single cell variational inference (SCVI) architecture, employing a zero-inflated negative binomial output distribution, demonstrates challenges in learning perturbational data distributions. In contrast, a standard VAE architecture with a Gaussian output distribution on normalized gene expressions, when paired with the structural causal model, achieves superior performance compared to current state-of-the-art methods. This hybrid approach, termed the Single-Cell Causal Variational Autoencoder (SCCVAE), demonstrates robust capabilities in both interpolation and extrapolation.&#13;
&#13;
For observed perturbations, the SCCVAE framework reveals latent representations that identify functional perturbation modules and simulate single-gene knock-down experiments across varying penetrance levels. These findings highlight SCCVAE as a powerful tool for interpreting and predicting perturbational responses at the single-cell level, advancing the integration of causal and variational approaches in computational biology.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159128</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Economic Advantage Calculator: An Extension of the Quantum Tortoise and Classical Hare Framework</title>
<link>https://hdl.handle.net/1721.1/159127</link>
<description>Quantum Economic Advantage Calculator: An Extension of the Quantum Tortoise and Classical Hare Framework
Mejia, Frederick
For some algorithmic problems, quantum computation has the potential to provide enormous speedups over classical computers. However, the drastic slowdowns associated with running error-free quantum hardware make achieving these theoretical advantages challenging. Researchers and industry leaders planning for the future would benefit from understanding when it will be both feasible and advantageous to switch to quantum computing platforms. This thesis builds on the framework by Choi, Moses, and Thompson (2023) to evaluate the feasibility and timeline for achieving Quantum Economic Advantage (QEA)—the point at which quantum hardware can outperform comparably-priced classical machines for specific computational tasks. This thesis substantially extends and deepens this framework and introduces a calculator to make these analyses accessible. The model incorporates parameters from quantum hardware vendors, such as physical-logical qubit ratios and overall connectivity, alongside the computational complexities of specific problems, to estimate the year of QEA. Most of the parameters in the tool are freely adjustable, allowing users to explore how varying assumptions about quantum improvement and technological advancement influence the projected timeline for QEA.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159127</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure, Function, and Interaction in Protein Language Models</title>
<link>https://hdl.handle.net/1721.1/159126</link>
<description>Structure, Function, and Interaction in Protein Language Models
Zheng, Jared
In recent years, transformer architectures have shown remarkable capabilities in learning meaningful representations from text and images. This approach has been extended to the realm of protein sequences through pretrained protein language models, which have excelled in various protein engineering tasks. In this thesis, we investigate a pre-trained protein language model’s ability to predict protein structure and the effects of mutations. For many advanced protein understanding tasks, such as predicting protein function and protein-protein interactions, fine-tuning of the model is essential. We explore methods to fine-tune the Evolutionary Scale Modeling (ESM2) model, a pretrained protein language model, for predicting protein functions structured as Gene Ontology terms and predicting protein-protein interactions. Notably, we develop a novel method of modeling the hierarchy constraint in GO term prediction that improves training convergence and test performance while making the model hierarchically consistent with GO. This research aims to enhance our understanding of protein language models in decoding complex biological information, thereby contributing to advancements in computational biology.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159126</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three Essays on the Economics of Land Use, Environmental Value, and Public Spending</title>
<link>https://hdl.handle.net/1721.1/159125</link>
<description>Three Essays on the Economics of Land Use, Environmental Value, and Public Spending
Larson, Kelsey R.
Across the world, public spending on government programs profoundly alters land use, preservation of environmental value, and the wellbeing of rural populations. These essays explore three such programs and derive lessons for improving their targeting. Chapter 1 tests the effect of conservation easement tax incentives on land conservation in Virginia, using a difference-in-differences design around a 2002 tax reform. It finds that the environmental quality distribution of easements is wide and matches the statewide quality distribution of all undeveloped land, suggesting the program has considerable room to improve targeting. Increasing tax incentives attracts donations of similar or lower quality, but targeting tax incentives only at high-quality land would substantially increase high-quality acres at a cost of 1.18 low-quality acres per high-quality acre. Chapter 2 investigates the targeting of short-term incentives for long-term behavior change, focusing on the case of the EQIP agricultural incentives program. The model connects the short-term and long-term effects of incentives as products of the immediate adoption costs and long-term repeated costs and benefits of a practice. If populations vary primarily by adoption cost, targeting groups with the greatest short-term effect will also maximize the long-term effect. If populations vary primarily by long-term costs and benefits, the groups with the greatest short-term impact are those for whom the practice is highly unprofitable in the long run, and a program can improve long-term impacts by instead targeting those for whom the practice is slightly profitable in the long run. A discontinuity analysis comparing successful and unsuccessful EQIP applicants shows that EQIP induces significant short-term change. Chapter 3 investigates the behavior of Mongolian livestock markets after severe weather shocks, and the role that a livestock insurance program may play in smoothing shocks. 
During severe Mongolian winters, livestock sales increase and prices fall as credit-constrained nomadic herders look to make necessary investments to protect their remaining herd. National integration in livestock markets absorbs a significant share of the weather-related shocks, as 40-60% of district price risk is due to national market fluctuations and 20-40% is due to province effects. This paper finds that national mortality strongly drives price variations, and livestock insurance reduces sales during high-mortality periods.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159125</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Healthcare Agents: Large Language Models in Health Prediction and Decision-Making</title>
<link>https://hdl.handle.net/1721.1/159124</link>
<description>Healthcare Agents: Large Language Models in Health Prediction and Decision-Making
Kim, Yubin
Large Language Models (LLMs) are transforming healthcare, yet utilizing them for clinical applications presents significant challenges. In this thesis, we explore two critical aspects of healthcare AI: (1) leveraging LLMs for multimodal health prediction from wearable sensor data and (2) developing a collaborative AI framework for medical decision-making. We first introduce a Health-LLM framework that performs multimodal fusion of temporal physiological signals from wearable devices with contextual metadata to predict health outcomes. By implementing novel context enhancement strategies, our framework demonstrates significant improvements in prediction accuracy across multiple health domains compared to existing benchmarks. Furthermore, we present MDAgents, an adaptive framework that optimizes multi-agent LLM collaboration for complex medical reasoning tasks. MDAgents dynamically configures agent roles and interaction patterns based on task complexity, implementing a hierarchical consensus mechanism that emulates clinical team dynamics. Through comprehensive evaluation on medical diagnosis and reasoning tasks, MDAgents exhibits superior performance in multimodal medical reasoning compared to single-agent approaches. Our findings demonstrate that LLMs, when architected for multimodal integration and strategic collaboration, can serve as robust agents in healthcare systems, advancing both preventive medicine through continuous health monitoring and clinical decision support through distributed AI reasoning.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159124</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Disease Resistance in a Reservoir Species for the Mice Against Ticks Project</title>
<link>https://hdl.handle.net/1721.1/159123</link>
<description>Engineering Disease Resistance in a Reservoir Species for the Mice Against Ticks Project
Buchthal, Joanna
This thesis explores the application of genome editing technologies to combat zoonotic infectious diseases through the development of a novel heritable immunization strategy targeting reservoir species. Focusing on Lyme disease, where white-footed mice (Peromyscus leucopus) serve as the primary reservoir, we propose embedding immunity into the germline of these animals to disrupt the disease transmission cycle and reduce the prevalence of the disease in the environment. By establishing genome engineering protocols for Peromyscus and demonstrating heritable protection against Lyme disease in genetically engineered Mus musculus, we show the feasibility of heritable immunization for long-term disease prevention. This work highlights the potential of genetic engineering for ecological interventions, offering a novel approach to public health challenges while fostering responsible community engagement in ecosystem engineering.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159123</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the Spatial Transcriptome Across Whole Organisms</title>
<link>https://hdl.handle.net/1721.1/159122</link>
<description>Mapping the Spatial Transcriptome Across Whole Organisms
Zhang, Ruihan
This study utilizes Expansion Sequencing (ExSeq) to thoroughly investigate the spatial transcriptome of the Caenorhabditis elegans (C. elegans) body. Beyond mapping gene distribution within individual specimens, this research sequences multiple C. elegans to identify both shared and distinct transcriptomic features. The findings lay crucial groundwork for future integration of transcriptomic data with in situ connectomics and in vivo neural activity recordings. Understanding the spatial transcriptome in C. elegans is vital for insights into neural circuit coordination, disease mechanisms, and developmental biology.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159122</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategizing against online learners in normal form repeated games</title>
<link>https://hdl.handle.net/1721.1/159121</link>
<description>Strategizing against online learners in normal form repeated games
Assos, Angelos
With the advent of machine learning and AI, learning algorithms are becoming more and more prevalent in online learning settings, where sequential decision-making is required. In such settings, the decisions of each agent can affect the utilities (or losses) of the other agents, as well as influence the decisions made by other agents later on in the interaction. Therefore, if an agent is good at anticipating the behavior of the other agents, in particular how they will make decisions in each round as a function of their experience thus far, he could try to judiciously make his own decisions over the rounds of the interaction so as to influence the other agents to behave in a way that ultimately benefits his own utility. In this thesis, we study repeated two-player games involving two agents: a learner, which employs an online learning algorithm to choose his strategy in each round; and an optimizer, which knows the learner’s utility function, parameters and the learner’s online learning algorithm. The optimizer wants to plan ahead to maximize his own utility while taking into account the learner’s behavior. We study this setting in zero-sum and general-sum games. In zero-sum games, we provide algorithms for the optimizer that can efficiently exploit a learner that employs a specific online learning algorithm in discrete and continuous-time dynamics. Specifically, the learner employs the Multiplicative Weights Update (MWU) algorithm for the discrete-time games, and the Replicator Dynamics in the continuous-time games. In general-sum games, we provide a negative result. Our negative result shows that, unless P=NP, there is no Fully Polynomial Time Approximation Scheme (FPTAS) for maximizing the utility of an optimizer against a learner that best responds to the history in each round. 
We additionally provide exponential-time algorithms that efficiently strategize against a learner that uses MWU, as well as a new way of thinking about strategizing against online learners via calculus of variations.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159121</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Semantically Grounded, Long Horizon Planning and Execution for Autonomous Agents</title>
<link>https://hdl.handle.net/1721.1/159120</link>
<description>Enabling Semantically Grounded, Long Horizon Planning and Execution for Autonomous Agents
Covarrubias, Lucian
Robots have been playing an ever increasing role in complex environments, often in coordination with teams of systems or humans. Autonomous systems of the future will need to be tightly grounded in the real world, drawing information directly from their environment to develop an understanding of the world. They will need to maintain a semantic understanding of their environment, including the kinds of objects they observe and their relationships to each other. At the same time, they must be able to reason over diverse constraints related to their tasks, such as time limits and resource usage. While there are existing approaches which enable robots to execute tasks with semantic goals, such as finding a certain type of object in a room, they often fail to consider the multitude fo task specific constraints which are vital to robust performance. On the other hand, planners which consider task specific constraints require a human to provide all information about the environment manually. These systems are too cumbersome to model complex tasks, requiring hours of manual effort which is prone to errors. This thesis presents an architecture for semantically grounded planning which leverages the strengths of constraint based planners while automating the environmental modeling step with an advanced semantic perception engine. By automating environmental modeling, we are able to create a system which executes complex semantically grounded tasks such as navigating to certain objects within a certain room, without major user input which is typically required of these systems.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159120</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transformers as Empirical Bayes Estimators: The Poisson Model</title>
<link>https://hdl.handle.net/1721.1/159119</link>
<description>Transformers as Empirical Bayes Estimators: The Poisson Model
Jabbour, Mark
We study the ability of transformers to perform In Context Learning (ICL) in the setting of Empirical Bayes for the Poison Model. On the theoretical side, we demonstrate the expressibility of transformers by formulating a way to approximate the Robbins estimator, the first empirical Bayes estimator for the Poisson model. On the empirical side, we show that transformers pre-trained on synthetic data can generalize to unseen prior and sequence lengths, outperforming existing methods like Robbins, NPMLE, and ERM monotone in efficiency and accuracy. By studying the internal behavior of the representations of the intermediate layers of these transformers, we found that the representation converges quickly and smoothly over the layers. We also demonstrate that it’s unlikely transformers are implementing Robbin’s or NPMLE estimators in context.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159119</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lifting 2D Vision Models into Structured Scene Representations</title>
<link>https://hdl.handle.net/1721.1/159118</link>
<description>Lifting 2D Vision Models into Structured Scene Representations
Tang, George
Intelligent agents can leverage structured scene representations capable of capturing object compositionality, affordances, and semantics as a world emulator. However, 3D scene data is limited, rendering supervised and self-supervised methods ineffective. Recent advances in 2D foundation models exhibit remarkable performance and generalization. Concurrently, several works have demonstrated lifting feature maps produced by these models into a 3D feature representation. This thesis further explores how lifting can be effectively employed to construct pixel-level fidelity structured scene representations.&#13;
&#13;
Learned scene representations such as NeRF and Gaussian Splatting do not support additional functionality besides novel view rendering. The world is compositional: a scene can be described in terms of objects. Correspondingly, we present a lifting solution for efficient open-set 3D instance segmentation of learned scene representations. Compared to previous approaches, our solution is more than an order of magnitude faster and can handle scenes with orders of magnitude more instances.&#13;
&#13;
Toward identifying affordances, we tackle the problem of zero-shot mesh part segmentation. Learning-based mesh segmentation does not generalize due to a lack of diverse mesh segmentation datasets, while traditional shape analysis methods are overfitted to previous benchmarks. We present a lifting solution for mesh part segmentation that overcomes these limitations, showing comparable performance to top-performing shape-analysis methods on traditional benchmarks while exhibiting much better generalization on a novel mesh dataset curated from an image-to-3D model.&#13;
&#13;
Beyond feature fields, lifting can be used for a variety of applications, including scene understanding and editing. However, current lifting formulations are inefficient and often exhibit additional unintended modifications. To address these deficiencies, we generalize lifting to semantic lifting, which incorporates per-view masks indicating relevant areas. These masks are determined by querying corresponding per-view feature maps derived from feature fields. However, it is impractical to store per-view feature maps, and the scene representations can be expensive to store and query. To enable lightweight, on-demand retrieval of pixel-aligned relevance masks, we introduce a Vector Quantized Feature Field. We demonstrate the effectiveness of semantic lifting with our method on complex indoor and outdoor scenes from the LERF dataset.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159118</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Affordance-Based Generation for 3D Generative AI</title>
<link>https://hdl.handle.net/1721.1/159117</link>
<description>Toward Affordance-Based Generation for 3D Generative AI
Wang, Sean
Recent advances in 3D content creation with generative AI have made it easier to generate 3D models using text and images as input. However, translating these digital designs into usable objects in the physical world is still an open challenge. Since these 3D models are generated to be aesthetically similar to their inputs, the resulting models tend to have the visual features the user desires but often lack the functionality required for their use cases. This thesis proposes a novel approach to generative AI in 3D modeling, shifting the focus from replicating specific objects to generating affordances. We trained models that allow users to create point clouds that satisfy physical properties called affordances, which are properties that describe how an object should behave in the real world. By ensuring that the generated objects have the expected affordances, we explore how existing tools can be augmented to generate 3D objects whose functionality is consistent with their appearances.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159117</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Fine-Tuning Techniques for Removing Tamper-Resistant Safeguards for Open-Weight LLMs</title>
<link>https://hdl.handle.net/1721.1/159116</link>
<description>Exploring Fine-Tuning Techniques for Removing Tamper-Resistant Safeguards for Open-Weight LLMs
Zhang, Sarah
Open-source models present significant opportunities and risks, especially in dual-use scenarios where they can be repurposed for malicious tasks via adversarial fine-tuning. In this paper, we evaluate the effectiveness of Tampering Attack Resistance (TAR), a safeguard designed to protect against such adversarial attacks, by exploring its resilience to full-parameter and parameter-efficient fine-tuning. Our experiments reveal that while TAR enhances tamper resistance compared to models without safeguards, it remains susceptible to variability. Specifically, we observe inconsistencies where the same adversarial attack can succeed under some initializations and fail under others. This is a critical security risk as even a single instance of failure can lead to models being exploited for harmful purposes. These findings highlight the limitations of current tamper-resistant safeguards and emphasize the need for more robust safeguards to ensure the safe and ethical deployment of open-source models.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159116</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The SpaseCroissant Oven: Automatic Metadata Generation For Open-Source Space Weather Datasets</title>
<link>https://hdl.handle.net/1721.1/159115</link>
<description>The SpaseCroissant Oven: Automatic Metadata Generation For Open-Source Space Weather Datasets
Chen, Edenna H.
The rise of machine learning (ML) algorithms has led to a parallel rise in ML-ready datasets. A novel metadata schema released by OpenAI and MLCommons called Croissant, which is specifically designed for ML-ready datasets, aims to increase data accessibility, user understanding of data, and accuracy of claims based on data. However, current methods to automatically generate Croissant metadata present difficulties, such as requiring manual entry. This can be especially difficult when attempting to preserve information about large ML-ready datasets, which are often derived from large scientific repositories belonging to organizations such as the National Aeronautics and Space Administration (NASA). These major scientific repositories provide their own metadata standards, such as NASA’s Space Physics Archive Search and Extract (SPASE) schema, but context from this metadata can often be lost during data processing. This thesis presents a novel, improved approach to Croissant metadata generation involving a hybrid of parsing logic and Large Language Model (LLM) inference, as well as recommendations for future Croissant standards and SPASE-to-Croissant schema metadata conversion, that aims to retain this lost context.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159115</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applied Plankton Image Classification for Imaging FlowCytobot Data</title>
<link>https://hdl.handle.net/1721.1/159114</link>
<description>Applied Plankton Image Classification for Imaging FlowCytobot Data
Duckworth, Barbara R.
As the ability to gather vast quantities of data from oceanographic bioimaging sensors increases, so too does the need to process, analyze, and store that data in a consistent, standard way that enables replicability and accessibility for future studies. The Imaging FlowCytobot (IFCB), an automated submersible flow cytometer, produces high-resolution images of plankton at rates up to 10 Hz for months or years, resulting in billions of images. This project compares various methods to categorize incoming images of plankton gathered by the IFCB: Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), and self-supervised learning with Masked Autoencoders (MAE). The benefits and downsides of each model are analyzed and discussed so that future IFCB operators can process their data using the methods that best align with their research questions, along with step-by-step explanations of the pros and cons of each method depending on the use case.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159114</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computational Tsirelson's Theorem for All Compiled Nonlocal Games</title>
<link>https://hdl.handle.net/1721.1/159113</link>
<description>A Computational Tsirelson's Theorem for All Compiled Nonlocal Games
Falor, Chirag
Nonlocal games, defined as cooperative tasks between spatially separated players, have been a foundational tool in the study of quantum advantage and have been useful in classically verifying quantum computations. To address the challenge posed by the spatial separation assumption, Kalai et al. (STOC' 23) introduced a compilation procedure that compiles any nonlocal game into an interactive game between a classical verifier and a computationally bounded quantum prover. This compilation preserves classical soundness and quantum completeness, though quantum soundness has been established only in the asymptotic limit of the security parameter or for specific classes of games. In this work, we advance towards a concrete framework to bound the quantum value of compiled nonlocal games. Building on the notion of nice sum-of-squares certificates, introduced by Natarajan and Zhang (FOCS' 23) to bound the value of the compiled CHSH game, we extend the niceness framework and construct a hierarchy of semidefinite programs that searches exclusively over nice certificates. We show that this hierarchy converges to the optimal quantum value of the game. Additionally, we present a transformation to make any degree-1 sum-of-squares certificate nice. This approach provides a systematic method to reproduce known bounds for special classes of games and showcases the general applicability of the framework to low-degree certificates. Source code: https://github.com/chiragfalor/Nice-SoS-SDP
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159113</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instructify: Demystifying Metadata to Visual Instruction Tuning Data Conversion Supplementary Materials</title>
<link>https://hdl.handle.net/1721.1/159112</link>
<description>Instructify: Demystifying Metadata to Visual Instruction Tuning Data Conversion Supplementary Materials
Hansen, Jacob A.
Visual Instruction Tuning (VisIT) data, commonly available as human-assistant conversations with images interleaved in the human turns, are currently the most widespread vehicle for aligning strong LLMs to understand visual inputs, converting them to strong LMMs. While many such VisIT datasets are available, most of them are constructed via ad hoc techniques, separately proposed by different groups, commonly poorly documented, without available (reproducible) code, and employing paid closed-source model APIs like GPT-4, Gemini, or Claude to convert image metadata (labels) to VisIT instructions. This incurs significant cost and difficulty to scale, improve quality, or produce VisIT data for new datasets. In this work, we address these challenges and propose an open and unified recipe and approach, Instructify, for converting available metadata to VisIT instructions using open LLMs. Our multi-stage Instructify features an efficient framework for metadata grouping, quality control, data and prompt organization, and conversation sampling. We show that our approach can reproduce or improve the data quality of the available VisIT datasets when applied to the same image data and metadata sources, improving GPT-4 generated VisIT instructions by ∼3% on average and up to 21% on individual benchmarks using open models, such as Gemma 2 27B and Llama 3.1 70B. We further show that our approach enables effective performance scaling (in terms of resulting LMM performance on a large variety of benchmarks) of the produced VisIT data both in terms of quantity and quality. In addition, we explore the impact of multiple factors, including conversation format, base model selection, and resampling strategies.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159112</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Single-Cell ATAC-Seq for Genomic Language Models and Multimodal Foundation Models</title>
<link>https://hdl.handle.net/1721.1/159110</link>
<description>Leveraging Single-Cell ATAC-Seq for Genomic Language Models and Multimodal Foundation Models
Kim, Dong Young
Single-cell Assay for Transposase-Accessible Chromatin using sequencing (scATAC-seq) has emerged as a powerful tool for profiling chromatin accessibility at single-cell resolution. By capturing epigenomic landscapes, scATAC-seq provides critical insights into the regulatory elements that govern gene expression. However, the sparsity of scATAC-seq data, resulting from its low sequencing depth relative to the genome’s potential complexity, poses significant challenges for effective and accurate modeling. To advance the utility of scATAC-seq in modern biology, we explore its integration into deep learning frameworks through two innovative applications. First, we demonstrate how incorporating scATAC data enhances the performance of existing genomic language models by providing complementary context about chromatin accessibility. Specifically, we introduce scATAC to improve SegmentNT, a DNA segmentation model that leverages the Nucleotide Transformer (NT) to predict 14 types of genomic and regulatory elements from DNA sequences up to 30kb at single-nucleotide resolution. Second, we introduce a novel multimodal foundation model that extends existing scRNA-seq foundation models by integrating scATAC-seq data. This model captures crossmodal relationships between gene expression and chromatin accessibility, establishing a unified framework that can be fine-tuned for diverse downstream tasks, including cell type classification and cross-modal imputation. Our work highlights the potential of incorporating scATAC-seq data into existing genomics deep learning strategies, providing a framework for integrating regulatory DNA analysis more seamlessly into genomic modeling.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159110</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unsupervised Time Series Anomaly Detection Using Time Series Foundational Models</title>
<link>https://hdl.handle.net/1721.1/159109</link>
<description>Unsupervised Time Series Anomaly Detection Using Time Series Foundational Models
Nguyen, Linh K.
The rapid generation of time series data across a wide array of domains—such as finance, healthcare, and industrial systems—has made anomaly detection a critical task for identifying irregular patterns that could signal significant events like fraud, system failures, or health crises. Traditional approaches to time series anomaly detection, including statistical models like ARIMA and deep learning methods, have proven effective but often require an extensive training phase, which can be both data- and time-consuming. In recent years, the emergence of foundational models, including large language models (LLMs) and specialized time series models, has opened up new possibilities for anomaly detection. These models, pre-trained on vast and diverse datasets, offer the potential to perform tasks with minimal task-specific training. This thesis investigates the feasibility of leveraging these foundational models for time series anomaly detection, with the aim of determining their effectiveness in detecting anomalies without the traditional training requirements. We also aim to investigate whether foundational models pretrained specifically on time series data yield better results compared to large language models (LLMs) that were not pretrained for time series tasks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159109</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>First-Person Teleoperation of a Bimanual Robotic System</title>
<link>https://hdl.handle.net/1721.1/159108</link>
<description>First-Person Teleoperation of a Bimanual Robotic System
Thakur, Nandini
First-person teleoperation of robots is a large field of research that could provide many benefits for automation. Teleoperation is a popular method to collect demonstrations for imitation learning that are easily learned by the robot, and thus it is important to create teleoperation systems that are intuitive and enable human-like perception of a scene. Adding a first-person component to basic teleoperation systems is key to improving operators’ visual perception and making teleoperation possible for extended periods of time. Existing teleoperation systems do not integrate elements that provide the operator with a good perception of the task space, such as a first-person VR view and the ability to leverage the neck to search around the space. They rely on techniques such as a third-person view of the space, or provide a first-person view but without the ability to move the neck to look around. This thesis proposes a VR-based teleoperation system with an actuated 5-DoF neck for enabling human-like perception and improving the ability to perform high-quality demonstrations for use in imitation learning.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159108</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Embedded Tiny Machine Learning (SETML): A General Framework for Embedded Distributed Inference</title>
<link>https://hdl.handle.net/1721.1/159107</link>
<description>Scalable Embedded Tiny Machine Learning (SETML): A General Framework for Embedded Distributed Inference
Vidal, Justice
The growth of machine learning applications has increased the necessity of lightweight, energy-efficient solutions for resource-constrained devices such as the STM32C011F6 microcontroller. However, such devices struggle with supporting larger models even after miniaturization techniques such as quantization and pruning. To facilitate machine learning inference on such devices, this work introduces Scalable Embedded Tiny Machine Learning (SETML), a general framework for distributed machine learning inference on microcontrollers. Furthermore, the framework is designed to be compatible with sensor-based applications that can take advantage of small hardware, such as gesture recognition, by testing binary size constraints with an accelerometer and its supporting library. This work evaluates the latency, power consumption, and cost trade-offs of using multiple small and efficient devices versus a larger device. The STM32C011F6 microcontroller is used as the primary hardware in the tested device network, while evaluation of the system is done in comparison with a device using a similar core processing element, the Seeeduino XIAO SAMD21.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159107</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of the energy transfer network in upconverting nanoparticles</title>
<link>https://hdl.handle.net/1721.1/159106</link>
<description>Investigation of the energy transfer network in upconverting nanoparticles
Zheng, Yuxuan
Upconverting nanoparticles (UCNPs) have emerged as promising luminescent materials for a wide range of applications, including bioimaging, drug delivery, and photovoltaics. The intricate network of energy transfer processes within UCNPs enables their unique ability to convert low-energy infrared (IR) radiation into higher-energy visible light through photon upconversion, but it also presents significant challenges for accurate modeling. Despite their broad applications, theoretical models of UCNPs remain incomplete, and current models fail to accurately reproduce all experimental results. This thesis presents a comprehensive comparison of prevalent modeling approaches with the aim of developing improved models that more faithfully reproduce experimental observations. Using the Judd-Ofelt theory, we calculated essential transition rate parameters, including electric dipole (ED), magnetic dipole (MD), multiphonon relaxation (MPR), and energy transfer (ET) rates, using constants sourced from the literature. We implemented both Monte Carlo models and Ordinary Differential Equation (ODE) models. Using the calculated rate parameters, we simulated the energy transfer pathways in Yb³⁺-Er³⁺ and Yb³⁺-Tm³⁺ UCNPs. Simulation results from all models were compared with experimental data to evaluate their effectiveness in capturing key luminescent properties such as population evolution, lifetime, saturation curves, and spectral purity.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159106</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Economic Engineering of Personalized Experiences</title>
<link>https://hdl.handle.net/1721.1/159105</link>
<description>The Economic Engineering of Personalized Experiences
Haupt, Andreas A.
Consumer applications employ algorithms to deliver personalized experiences to users in search, e-commerce, online streaming, social media, and other domains, impacting how users spend their time and money. This dissertation studies the design of such personalization algorithms and the economic consequences of their deployment.&#13;
&#13;
The first chapter focuses on the impacts of reward signal precision on online learning algorithms frequently used for personalization. Reward signals are precise when individual measurement is accurate and heterogeneity is low. While some algorithms, which we call "risk-averse", favor experiences that yield more precise reward signals and hence favor measurability and homogeneity, others, in the limit, choose experiences independently of the precision of their associated reward signals.&#13;
&#13;
The third chapter analyzes how preference measurement error differentially affects user groups in optimal personalization. If such measurement error is symmetric, welfare maximization requires delivering majority-preferred experiences at a rate beyond their proportion in the user population and hence increasing concentration. However, asymmetric preference measurement errors may arise due to users' actions to reduce measurement error. Participants in a survey of TikTok users state that they engage in such costly actions.&#13;
&#13;
The fifth chapter studies, through the introduction of a new desideratum for market design, how to achieve personalization without infringing on user privacy. Contextual privacy demands that all (preference) information elicited by an algorithm is necessary for computing an outcome of interest in all possible configurations of users’ information. This property is demanding, as it requires that no two pieces of information can jointly but not unilaterally influence the outcome. Algorithms can protect the privacy of users who are queried late and whose information is not used to compute public statistics of the user population, hence achieving the relaxed notion of maximal contextual privacy.&#13;
&#13;
Two brief chapters introduce new models of human-machine interaction. The first examines the design of generative models, while the second proposes stated regret of past consumption as a new data modality and presents a corresponding data collection tool.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159105</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creating Links: Building an Educational Platform to Ask Relevant Questions in Education</title>
<link>https://hdl.handle.net/1721.1/159104</link>
<description>Creating Links: Building an Educational Platform to Ask Relevant Questions in Education
García Bulle Bueno, Bernardo
In this thesis, I document the findings and process through which we built an educational platform (JANN) to do research while having a positive impact on a community. Through JANN we have coordinated more than 100k hours of tutoring sessions and built (to our knowledge) one of the largest databases of educational recordings in the world. Broadly, the contributions here are twofold: first, we demonstrate the research potential building a platform can offer. Second, using our educational platform, we pursue novel questions in the field of education with granular information that is traditionally inaccessible for research.&#13;
&#13;
After introducing the work and describing the construction of the platform, the first chapter details an RCT where we show the effect of receiving tutoring on Math performance. Second, we document how we built an estimator of emotions using audio. The estimator was further validated on our dataset and then used to show that activating emotions are related to better class quality. Third, we document an RCT where Math tutors were asked to dedicate some time per week to teach Socioemotional learning skills. We show that this had a positive effect on learning. Moreover, it also caused tutors to teach longer Math classes. Students showed more trust in their tutors, and ultimately the classes had a higher prevalence of positive emotions. Finally, we also study doing causal inference on observational data on another platform. Using Facebook data we study digital groups and through a regression discontinuity design we find that joining a group has a positive effect on making new friends and can diversify a person's connections in terms of income. &#13;
&#13;
Overall, we find that building a platform can broaden the granularity of the data one has access to, make research more scalable, and ultimately also have a positive effect on a community.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159104</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing Blood-Based Laboratory Diagnostics for Alzheimer’s Disease: A Systems Approach</title>
<link>https://hdl.handle.net/1721.1/159103</link>
<description>Assessing Blood-Based Laboratory Diagnostics for Alzheimer’s Disease: A Systems Approach
Peralta Walker, Stephanie Christine
This thesis adopts a systems approach to analyze the complex network of stakeholders involved in adopting blood-based laboratory screening tests for Alzheimer’s disease (AD). Traditional diagnostic methods, including cerebrospinal fluid (CSF) testing and positron emission tomography (PET) brain imaging, are invasive, costly, and inaccessible to many. Blood-based tests offer a less invasive and more cost-effective alternative, yet they remain underutilized in clinical practice. By conducting a literature review, stakeholder interviews, and a Kano analysis, the thesis identifies and evaluates the key stakeholder needs to support the widespread adoption of these tests, such as the need for demonstrated clinical performance of these tests, reimbursement, broader education of patients and health care professionals, and safe, effective medicines to treat AD. The research highlights two emerging tests that have published studies demonstrating clinical validation, a key parameter of clinical performance. A stakeholder tension analysis is included with proposed tension resolutions using stakeholder saliency to guide prioritization. Addressing these stakeholder needs could facilitate broader implementation, improve early diagnosis, and support emerging therapeutic interventions for AD, thus reshaping the diagnostic landscape for this increasingly prevalent disease.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159103</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Coast Guard Infrastructure Management: A Multi-Criteria Framework for Prioritizing Maintenance Projects</title>
<link>https://hdl.handle.net/1721.1/159102</link>
<description>Enhancing Coast Guard Infrastructure Management: A Multi-Criteria Framework for Prioritizing Maintenance Projects
Ballard, Zachary N.
The United States Coast Guard is currently transforming its decision-making process for prioritizing shore infrastructure maintenance and repair projects. Subjectivity in the current decision-making process appears to generate inadequate project prioritizations. Stakes are high for an aging infrastructure portfolio in harsh coastal conditions, with increased national reliance on the Coast Guard in a fiscally constrained budgetary environment. Data availability, quality, and fidelity continue to increase, supporting the rationale for more robust and data-informed decision-making frameworks. &#13;
&#13;
The research begins with examining Coastal and Shore Operations (CSO) funding history, along with a thorough description of the current Centralized Planned Obligation Prioritization (C-POP) process. The complex, sociotechnical nature of the problem is highlighted by identifying all involved stakeholders and categorizing them through the leading view of stakeholder theory and salience. A detailed review of the governing asset management literature is conducted, gradually narrowing from a broad, international, and asset-type neutral perspective to more tailored infrastructure cross-asset prioritization material. Requisite framework data substance, collection, and analyses are described, and recommendations for data processing improvements are made. &#13;
&#13;
Two leading prioritization models are examined: the Importance and Urgency Quadrant Model and the Value Focused Multi-Criteria Decision Model. Their respective data visualizations are generated and analyzed. Using the multi-criteria analysis rooted in multi-attribute utility theory, four portfolios of measurably increasing value are constructed, compared with a baseline portfolio reflecting actual project selections in December 2023. These portfolio iterations include a linear programming solution to the Knapsack Problem of selecting projects that maximize overall portfolio utility within a budget limit while incorporating some of the more social and qualitative system properties. &#13;
&#13;
A traceable, adaptable, defendable, and objective data-informed multi-criteria framework is proposed, which aims to facilitate the effectiveness of the overall Coast Guard shore infrastructure portfolio in the long term.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159102</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Systems-Theoretic Approach to Organizational Design and Analysis</title>
<link>https://hdl.handle.net/1721.1/159101</link>
<description>A Systems-Theoretic Approach to Organizational Design and Analysis
Gutierrez, Lauren E.
A significant challenge for large organizations lies in organizational design, particularly for public sector bureaucracies and the largest of industry’s private firms. Organizations tend to turn to organizational design improvements when facing effectiveness and efficiency issues. Unfortunately, these large organizations struggle with organizational design because of their sheer size and complexity, which results in a fragmented and oftentimes faulty approach to improving their organization. Organizations, at their core, are a special type of system: a set of components that operate or work together to achieve some common purpose. Organizations are purely social systems in that their elements are not technical or engineered. &#13;
&#13;
Systems Theory provides a lens through which these types of social systems can be studied. Just like in engineered systems, an organization's emergent behavior is determined by its internal elements' complex interactions. Traditional organizational design and analysis methods focus on optimizing these internal elements in the hopes of re-integrating optimized elements in pursuit of organizational-level optimal behavior. Just like in traditional systems engineering, component-level optimization does not yield system-level optimal behavior. &#13;
&#13;
This thesis codifies a systems-theoretic approach to organizational design and analysis using the language of Systems Theory and the semantics of Systems-Theoretic Accident Model and Processes. By extending traditional Systems-Theoretic Process Analysis (STPA), a tool for hazard analysis used primarily for engineered systems, this work refines STPA’s concepts and terminology to be more accessible for analyzing social systems. Building off this extension, this thesis leverages a contemporary Department of Defense reorganization effort as a case study, illustrating Systems-Theoretic Organizational Design and Analysis (STAODA) as a tool to assess organizational design options.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159101</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence Tools, Curricula, and Agents for Creative Learning</title>
<link>https://hdl.handle.net/1721.1/159100</link>
<description>Artificial Intelligence Tools, Curricula, and Agents for Creative Learning
Ali, Safinah Arshad
Children's early development of creativity contributes to their learning outcomes and personal growth. However, as children enter formal schooling systems, their creativity declines. Although Artificial Intelligence (AI)-powered tools for K-12 learning hold immense potential for reducing barriers to creative expression, access to these AI tools and AI knowledge among K-12 students and educators remains inequitable, particularly for children from groups underrepresented in STEM. In this thesis, I explore how AI, as an emerging creative medium, can be made more accessible to all young creators. I explore two mechanisms of making a mode of creation more accessible: Creative AI literacy materials for diverse classrooms and AI agentic interactions for scaffolding creative expression for diverse learners. &#13;
&#13;
Utilizing literacy as a mode of making Creative AI tools accessible, I outline the design and evaluation of various Creative AI curricula that I have developed for diverse groups of K-12 students and teachers. To adapt AI learning to art classrooms, I co-developed the AI and Art curriculum with creative educators, designed specifically for use in creative classrooms with creative educators and learners. I implemented the curriculum with 94 middle and high school students across six week-long sessions. I report findings from teacher co-design sessions and students’ learning experiences. Teachers designed learning objectives and AI tools for their classrooms. Students gained knowledge and skills in art concepts, AI concepts, and the application of art in AI. Students also demonstrated significant shifts in their attitudes towards using AI in the creative process, and their sense of belonging in both AI and art communities was heightened. I discuss how AI curricula can be adapted to diverse disciplines and how art can serve as a meaningful avenue for students to engage with AI concepts. &#13;
&#13;
Utilizing social interaction from AI agents as a mode of fostering creative expression in children with neurodevelopmental disorders, I designed and applied inclusive child-robot interactions for collaborative creativity, where 32 elementary school children and a social robot collaboratively created picture stories. The robot provided creativity scaffolding during different parts of the creative storytelling process through social interactions such as feedback, question-asking, divergent thinking, and positive reinforcement, while personalizing the scaffolding to meet the unique needs of neurodivergent children. I investigated the impact of the social robot on children’s exhibited creativity and their emergent creative collaborative interactions in storytelling over multiple sessions. Inclusive design practices eliminated creative barriers for children with neurodevelopmental disorders, and the robot's creativity scaffolding interactions positively influenced children’s creative product and creative process in storytelling. I propose Inclusive Co-creative Child-robot Interaction (ICCRI) guidelines for fostering creativity in children with neurodevelopmental disorders, and accommodating diverse creator styles in complex, open-ended creative tasks.&#13;
&#13;
In this thesis, I contribute curricula, learning tools, child-robot interactions, and findings from examining long-term child-AI co-creative interactions. I discuss design implications for integrating AI tools, curricula and agents in creative learning environments. This thesis is a step towards empowering all children with powerful modes of creation, while helping them be responsible creators, thinkers and citizens in an AI-driven future.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159100</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Deep Learning Systems for Visual Perception on the Edge</title>
<link>https://hdl.handle.net/1721.1/159099</link>
<description>Efficient Deep Learning Systems for Visual Perception on the Edge
Yang, Shang
Deep learning for visual perception on edge devices has become increasingly critical, driven by emerging applications in autonomous driving and AR/VR. In particular, sparse convolution on 3D point clouds and Visual Language Models (VLMs) for image processing are two important methods for visual understanding and reasoning. However, the limited compute resources and memory on edge devices pose significant challenges, necessitating specialized system support for deep learning models. Specifically, the efficiency challenges for edge visual perception are twofold: First, the sparsity and inherent irregularity of point cloud data introduce substantial complexity for parallel processing. Second, the colossal model sizes and computational demands of LLMs and VLMs render edge deployment particularly challenging. In this thesis, we aim to address the efficiency issues of on-device deep learning via system-algorithm co-design. We first introduce TorchSparse++, a high-performance inference engine for sparse convolution on GPUs. Unlike existing sparse convolution systems, TorchSparse++ balances efficiency with implementation simplicity, achieving the best performance across different application scenarios. Specifically, we first create a highly efficient Sparse Kernel Generator that generates performant sparse convolution kernels at less than one-tenth of the engineering cost of the current state-of-the-art system. On top of this, we design the Sparse Autotuner, which extends the design space of existing sparse convolution libraries and searches for the best dataflow configurations for training and inference workloads. Consequently, TorchSparse++ achieves 2.9×, 3.3×, 2.2× and 1.7× measured end-to-end speedup on an NVIDIA A100 GPU over state-of-the-art MinkowskiEngine, SpConv 1.2, TorchSparse and SpConv v2 in inference; and is 1.2-1.3× faster than SpConv v2 in mixed precision training across seven representative autonomous driving benchmarks. 
It also seamlessly supports graph convolutions, achieving 2.6-7.6× faster inference speed compared with state-of-the-art graph deep learning libraries. Furthermore, to democratize the power of large foundation models in edge AI, we propose AWQ and TinyChat, a hardware-friendly full-stack solution for efficient on-device LLM and VLM deployment. AWQ is a novel quantization method based on the insight that not all weights in an LLM are equally important. Protecting only 1% of salient weights can greatly reduce quantization error. Specifically, AWQ employs an equivalent transformation and scales up the salient weight channels to reduce the weight quantization error, during which the scale is determined by collecting the activation statistics offline. Alongside AWQ, we further introduce TinyChat, an efficient and flexible inference framework tailored for 4-bit on-device LLM/VLMs. With on-the-fly dequantization, extensive kernel fusion and platform-aware weight packing, TinyChat offers 2.7-3.7× speedup over the Huggingface FP16 implementation on both desktop and mobile GPUs. It also enables the deployment of the 70B Llama-2 model on mobile GPUs. Together, these techniques significantly reduce the computational and memory costs for deploying deep learning models on edge devices, increasing the accessibility of deep learning for practical applications. We hope that this thesis can inspire future research on efficient edge AI across diverse modalities.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159099</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagnosing Supply Chain Threats to Defense Innovation</title>
<link>https://hdl.handle.net/1721.1/159098</link>
<description>Diagnosing Supply Chain Threats to Defense Innovation
Schneider, Donald E.
As the U.S. Department of Defense (DoD) shifts focus to an era of global power competition, the demand for rapid innovation and disruptive technologies has grown significantly. Prototyping remains a vital tool for advancing technological innovation, enabling early learning and risk reduction in developing complex systems. However, persistent supply chain challenges threaten the success of defense prototyping projects, causing schedule delays and diminished effectiveness. &#13;
This research identifies the underlying causes of supply chain disruptions specific to Federal Acquisition Regulation (FAR)-governed prototyping efforts, offering a socio-technical systems analysis that accounts for stakeholder relationships, market dynamics, and regulatory frameworks. Through extensive data collection, including stakeholder interviews across agencies, organizations, and supply chain roles, 181 issues were identified and analyzed, revealing over 500 contributing factors. The disciplined analysis of these factors identified three systemic root causes: (1) the misapplication of production management strategies that focus on efficiencies at scale and low tolerance for risk; (2) the pooling of supply chain management functions, which marginalizes prototyping’s unique demands and creates inefficiencies; and (3) regulatory and organizational barriers to entry that deter non-traditional suppliers, hindering innovation.&#13;
To address these systemic challenges, the thesis recommends restructuring organizations to better align with the unique demands and risks of prototyping while simultaneously creating pathways to reduce barriers for new suppliers. Resolving these issues will require a coordinated effort across the prototyping ecosystem. By addressing these root causes, the DoD can improve the efficiency and effectiveness of prototyping programs, ultimately sustaining U.S. technological superiority in an increasingly competitive global environment.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159098</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A framework for determining remote sensing capabilities for ecosystem services valuation</title>
<link>https://hdl.handle.net/1721.1/159097</link>
<description>A framework for determining remote sensing capabilities for ecosystem services valuation
Sampath, Aparajithan
Nature provides vital services—clean water, air purification, and climate regulation—to human societies through the planet's "natural capital," such as forests and lakes. Accurately measuring and valuing these ecosystem services is crucial for informed economic and development decisions. Remote sensing (RS) technology offers a powerful way to monitor natural capital (e.g., mapping forest cover, assessing water quality). However, current data lack the accuracy and precision needed for robustly monitoring the value of these services. This deficiency has impeded the use of natural capital assessment data in economic decision-making. This research partly addresses this challenge by developing a new framework to investigate the necessary sensor characteristics (spectral, radiometric, temporal, spatial) for effectively monitoring natural capital and quantifying ecosystem services. The framework first identifies the different types of services provided by an ecosystem, then uses a physics-based approach to identify crucial physical parameters and determines the measurements a sensor must make to quantify them. The sources of uncertainty impacting quantification and value estimation are also analyzed in detail. The approach is integrated to formulate a system utility function that is used to compare the performance of existing and proposed RS systems, and the overall results are subsequently used in proposing required capabilities for future remote sensing systems for natural capital monitoring. The framework is demonstrated on a case study focused on the flood attenuation function (service) provided by wetlands. Water budget models are utilized to identify essential parameters for monitoring water storage by wetlands. 
Using a study area encompassing the Fall Lake Creek reservoir (Oregon, USA), water storage capacity is measured and monitored by integrating USGS digital elevation models with Sentinel-1 synthetic aperture radar, Sentinel-2 optical data, and Planet Scope optical data. Results are validated against USGS published ground truth measurements. A strong correlation (r² of 0.95) was observed with all three datasets. An uncertainty analysis was conducted, using the random fields method, in which synthetic spatially autocorrelated errors were added to the RS datasets. Radiometric uncertainties were studied through the addition of Gaussian noise as a percentage of reflectance values, and results showed effects of &lt; 2.5% on estimated water volume. Elevation data uncertainties (which were approximated to simulate uncertainties in globally available DEMs) showed higher effects, and errors in estimated storage volumes increased proportionally. A study of inundation (for a case study over Miami, FL) revealed that as the root mean square error of the DEMs increased from 2 m to 7 m, the risk of flooding (defined as water depth accumulation of greater than 90 cm) increased more than threefold. A utility function was developed to evaluate sensors based on their ability to estimate wetland water volumes. This function considers sensor characteristics like spatial, radiometric, and temporal resolution. Notably, the function estimates that a future optical system with 2x improved spatial and 4x improved temporal resolution (compared to Sentinel-2) can increase utility 7-fold.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159097</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Uncanny Valley: An Empirical Study on Human Perceptions of AI-Generated Text and Images</title>
<link>https://hdl.handle.net/1721.1/159096</link>
<description>The Uncanny Valley: An Empirical Study on Human Perceptions of AI-Generated Text and Images
Kishnani, Deepali
This thesis explores how the uncanny valley phenomenon—historically tied to near-human robots—applies to text-based AI interactions and AI-generated images. While the concept has been predominantly studied in the context of robotics, the advent of generative AI reveals that text and visuals that are 'almost, but not quite' human can also provoke unease. &#13;
&#13;
Two experiments structure the study. The first examines GPT4-Turbo (GPT4o) text conversations. Sixty participants engaged with one of three “chatbots”: an “Uncanny-Valley Bot” (prompt engineered to fall in the uncanny valley), a “Human-Like Bot” (prompt engineered to converse like humans), or a human control. Godspeed Questionnaire results indicate that the uncanny valley effect surfaces in text-only form: participants consistently rated the “Uncanny-Valley Bot” lowest in anthropomorphism, animacy, likeability, and perceived intelligence. Furthermore, the experiment revealed that the distinction between GPT and humans is becoming increasingly blurred, with 60% of participants mistaking a human for GPT and 40% mistaking GPT for a human. Lastly, results highlighted a strong user preference for naturalness, human imperfections, and vulnerability. While human flaws enhance relatability, deviations that disrupt perceived humanity trigger the uncanny valley.&#13;
&#13;
The second experiment investigates AI-generated images produced by Stable Diffusion XL at varying degrees of realism. Fifty-six participants ranked each image’s “strangeness,” revealing that highly realistic or clearly stylized outputs raise fewer concerns. By contrast, images that inhabit the uncanny valley elicited discomfort. To quantify these findings, recognized metrics like Frechet Inception Distance (FID) and Kernel Inception Distance (KID) were used to compare real and AI-generated images. Both metrics strongly correlated with human perceptions, suggesting that distance metrics can be used to determine realism. The study also shows that image generation models can detect visual features associated with the uncanny valley. However, performance drops when the prompt calls for subtle, “mid-range” realism, indicating the model’s difficulty in maintaining comfort and believability at intermediate levels.&#13;
&#13;
Collectively, the two experiments confirm that uncanny valley responses are not confined to physical robots but persist in text-based dialogue and AI-synthesized images. Yet challenges remain. Short interaction windows, small participant samples, and reliance on selected AI models call for studies on the generalizability of these findings. Future work should adopt longitudinal designs, larger samples, and multiple AI systems. Addressing the uncanny valley in both textual and visual content is essential for advancing user trust and comfort in AI.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159096</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Higher-Order Interactions in Social Systems</title>
<link>https://hdl.handle.net/1721.1/159095</link>
<description>Higher-Order Interactions in Social Systems
Sarker, Arnab Kumar
The de facto representation of a social network is a graph: individuals are represented as nodes, and relationships between pairs of individuals are represented as edges. This results in a powerful abstraction by which social relationships can be systematically studied to understand emergent population-scale behavior. However, many social interactions occur in groups: three individuals may co-author a paper; a team of employees may collaborate on a task; a single tweet may mention four users. Breaking such interactions into a collection of pairwise relationships can oversimplify the rich social contexts in which these individuals know one another. This thesis explores a different paradigm of social network analysis, namely, using "higher-order" network models such as hypergraphs and simplicial complexes, which can explicitly encode co-present contexts between three or more individuals. The first two projects describe how higher-order interactions can differ from pairwise interactions in terms of micro-level content and macro-level structure, respectively. The latter two projects then develop an applied mathematical toolkit for the algebraic topological analysis of higher-order interactions in social networks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159095</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Battery Pack Design and Transient Performance Modeling for High-Power Legged Robots</title>
<link>https://hdl.handle.net/1721.1/159094</link>
<description>Battery Pack Design and Transient Performance Modeling for High-Power Legged Robots
Evagora, Christopher K.
Legged robotics has recently shifted toward advanced optimization-based control methods, such as Model Predictive Control (MPC), to generate agile and energy-efficient locomotion. By casting the control problem as an optimization task, robotic systems can account for complex robot dynamics and operational constraints, including joint limits and actuator capabilities. However, high-performance maneuvers also demand rigorous consideration of onboard battery constraints. This work presents an empirically derived lithium-ion battery model that captures transient voltage sag and time-dependent internal battery state, enabling more accurate prediction of feasible power delivery. Additionally, a custom high-power battery pack was designed to meet the power demands of the MIT Humanoid, emphasizing power density, safety, and maintainability. Although the work presented in this thesis does not integrate the battery model into a trajectory optimization framework, it establishes the foundation for future research that aims to couple battery and robot dynamics in robot control. Ultimately, this approach will facilitate safer and more capable legged robots by ensuring that planned trajectories respect both physical and electrochemical constraints.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159094</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimodal Graphical User Interface for 3D Model Fabrication Through Generative AI</title>
<link>https://hdl.handle.net/1721.1/159092</link>
<description>Multimodal Graphical User Interface for 3D Model Fabrication Through Generative AI
Báez Alicea, Isabel
In recent years, three-dimensional model generation and manipulation through generative AI has seen significant developments. Current projects enable the generation of three-dimensional assets from natural language prompts and input images, as well as functionality-aware model manipulation through mesh segmentation and categorization. However, all these workflows lack a coherent, unified platform that caters to users’ needs and each method’s technologies. Programs that rely on terminal-based commands lack the graphics needed for model interactions, and plugin extensions for 3D modeling applications are unintuitive and hard to extend for new functionalities. Additionally, both approaches require users to have prior computer engineering and/or 3D graphics knowledge. For this thesis, I propose the creation of a web-based, multimodal graphical user interface that consolidates all these different technologies in a single platform. By supporting model stylization and model generation (both from text prompts and input images), users can utilize combined workflows and expand the range of output possibilities for 3D asset creation. Other features in our interface include model uploading, saving, and downloading to enable a continuous stream of work on a single 3D asset. In addition, we expand the current capabilities of existing image-to-3D generation programs by enabling users to combine up to six images together and create a merged 3D object. Each of these images corresponds to a view angle from which the outputted mesh will be built.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159092</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Convergence of the Arnoldi Iteration for Estimating Extreme Eigenvalues</title>
<link>https://hdl.handle.net/1721.1/159091</link>
<description>Convergence of the Arnoldi Iteration for Estimating Extreme Eigenvalues
Chen, Cecilia
Krylov subspace methods, like the Arnoldi iteration, are a powerful tool for efficiently solving high-dimensional linear algebra problems. In this work, we analyze the convergence of Krylov methods for estimating the numerical range of a matrix. Prior bounds on approximation error often depend on eigenvalue gaps of the matrix, which lead to weaker bounds than observed in practice, specifically in applications where these gaps are small. Instead, we extend a line of work proving gap-independent bounds for the Lanczos method, which depend only on the matrix dimensions and number of iterations, to the more general Arnoldi case.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159091</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>GIM: Guidance as Initialization Method</title>
<link>https://hdl.handle.net/1721.1/159090</link>
<description>GIM: Guidance as Initialization Method
Duitama Cortes, Juan Sebastian
This work makes two contributions: the evaluation of early stop guidance for deep Fully Connected Networks (FCNs) and the introduction of guidance as an initialization method (GIM). Network initialization has been a meaningful and challenging topic in the field of machine learning (ML) for a long time. Many initialization methods exist, ranging from data-independent to data-dependent approaches. Initializations allow for a better understanding of model behavior and improvements in model performance. The novel guidance tool enabled us to propose GIM, a new technique that initializes a model by leveraging representational similarity with respect to models of different architectures. A model with an architecture that performs poorly in a specific task can be initialized with guidance from a model with an architecture that performs well in the respective task. We focus on the case of FCNs in the task of image classification and provide experimental results to validate our approach.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159090</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating Weather For A Mixed Reality Platform</title>
<link>https://hdl.handle.net/1721.1/159089</link>
<description>Simulating Weather For A Mixed Reality Platform
Ni, Hao
Complex systems are inherently difficult to teach in a traditional classroom setting. The We’re In This Together (WIT) project aims to provide a different teaching strategy by using AR/VR headsets to situate the students directly inside the system. WIT’s first game attempts to tackle common weather concepts including precipitation and fronts; however, the most recent version fails to demonstrate and model the concepts in an accurate and comprehensible way. This project focuses on developing a brand-new simulation layer for the game that better captures the causes behind common weather phenomena. The new simulation uses a particle-based approach to model the movement of air in the atmosphere and creates a more thorough and interactive experience to help students explore the various aspects of weather.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159089</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Realistic Tactile Stylization for Digital Fabrication using Enhanced UV Unwrapping Method</title>
<link>https://hdl.handle.net/1721.1/159088</link>
<description>Realistic Tactile Stylization for Digital Fabrication using Enhanced UV Unwrapping Method
Wong, Zoe
While recent advances in Generative AI enable visual stylization of 3D models using image prompts, they typically neglect tactile properties. TactStyle addresses this limitation by enabling creators to enhance 3D models with both visual and tactile properties derived from texture images. Using a fine-tuned image-generation model, TactStyle generates highly accurate heightfields that faithfully replicate the tactile properties of input visual textures and applies them to 3D models. However, applying textures to 3D models presents challenges, such as ensuring even texture resolution, avoiding texture warping, and minimizing visible seams. TactStyle’s current implementation often struggles with significant texture stretching and distortion caused by poor UV mapping, compromising the accuracy of the heightfields and diminishing the tactile fidelity of printed models. Our research systematically evaluates various UV unwrapping methods, including alternative UV projections and optimization-based neural UV mapping, to improve the realism and accuracy of texture application on 3D models in digital fabrication. Building on these findings, we will release a Blender plugin that integrates the optimal UV unwrapping methods with TactStyle, enabling creators to easily customize their 3D models with accurate tactile properties using only reference texture images. This work enhances the practicality and accessibility of tactile 3D model customization, bridging the gap between visual and tactile design elements.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159088</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying the Role of Transcription Factor RFX3 in 9p Deletion Syndrome</title>
<link>https://hdl.handle.net/1721.1/159087</link>
<description>Identifying the Role of Transcription Factor RFX3 in 9p Deletion Syndrome
Edwards, Lilly
9p deletion (9p-) syndrome is primarily characterized by intellectual disability, developmental delays, and autism. This project investigated how much of the neuronal phenotypes of 9p- syndrome could be attributed to RFX3, a transcription factor and autism risk gene. Bulk RNA-seq data of iPSC-derived neurons from patients with 9p- syndrome and CRISPR-engineered cell lines was analyzed using Principal Component Analysis, Differential Gene Expression analysis, and Functional Enrichment analysis. The findings indicate that RFX3 plays a significant role but is not the sole driver of the neuronal phenotypes. SMARCA2, a gene linked to intellectual disability and part of the SWI/SNF complex, was identified as a direct target of RFX3 in the commonly deleted region of chromosome 9p. Notably, the combined deletion of RFX3 and SMARCA2 led to greater dysregulation of SMARCA2 expression and SWI/SNF complex components than the deletion of either gene alone. These findings highlight the potential synergistic effects of RFX3 and SMARCA2 in 9p- syndrome and suggest their combined disruption may underlie the neuronal phenotypes observed.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159087</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating Model Editing for Unlearning in Large Language Models</title>
<link>https://hdl.handle.net/1721.1/159086</link>
<description>Investigating Model Editing for Unlearning in Large Language Models
Hossain, Shariqah
Data regulations on the Right to be Forgotten, such as that in the General Data Protection Regulation (GDPR) of the European Union, protect the right of users to remove private information from organizations. With the increasing usage and influence of large language models (LLMs) that are trained on personal data, a question of how to implement the removal of information within these models arises. In addition, LLMs are trained on a large corpus of data that is usually scraped from the Web. A current challenge with ensuring reliable and safe outputs from LLMs is false, toxic, harmful, or biased information from Web data that is captured in the knowledge of the model. Machine unlearning aims to remove unwanted information from a model, but many methods are inefficient for models with large numbers of parameters or fail to remove the entire scope of information without harming performance on the knowledge that is to be retained. Model editing algorithms solve a similar problem of changing information in LLMs, but they focus on redirecting inputs to a new target rather than removing that information altogether. Despite the parallels between model editing and unlearning, there has yet to be a thorough investigation of the potential of model editing approaches within this setting. In this work, we explore the ROME, IKE, and WISE editing algorithms and design new editing targets for an unlearning setting. To evaluate the potential of the model editing algorithms, we focus on unlearning fictitious information using the Task of Fictitious Unlearning (TOFU) benchmark. Through this investigation, we show that model editing approaches can exceed the performance of current unlearning methods at removing information, depending on the setting. They share the limitation of traditional unlearning of being unable to encapsulate the scope of what is to be unlearned without damage to overall model performance.
We hope to leverage this information to improve methods for unlearning model knowledge and therefore improve the reliability of LLMs.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159086</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward An Explainable Electric Power Grid Operation Assistant Using Large Language Models</title>
<link>https://hdl.handle.net/1721.1/159085</link>
<description>Toward An Explainable Electric Power Grid Operation Assistant Using Large Language Models
Ravichandran, Anish
This thesis explores potential applications of Large Language Models (LLMs) for assisting the analyses and decision-making of operators of complex electric power grids. The power grid is a critical piece of infrastructure currently challenged by increased electrification, the integration of renewable energy sources, and distributed energy resources (DERs). Human operators struggle to process the massive amounts of data produced by modern smart grids and need innovative solutions to handle the increased complexity of operational decisions. This thesis investigates the potential role of LLMs in grid operation tasks, focusing on interpretability and generalizability while exploring how LLMs can assist operators by providing actionable insights and recommendations. Multiple versions of LLM agents were developed, including naive and tool-assisted designs, and were evaluated on the Learn to Run a Power Network (L2RPN) benchmark for steady-state and cascading failure scenarios. While the LLM agents performed better in scenarios requiring exploratory decision-making, they struggled in steady-state operation and were constrained by their integration with tools and the testing environment. This work was limited by compute constraints, which affected the choice of model and the length of evaluation scenarios, and future work is needed toward seamless interaction between LLMs and power systems simulators. Nevertheless, LLMs have the potential to transform future grid operation, paving the way for a more resilient and sustainable energy sector in the 21st century.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159085</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>CIRCUIT: A Benchmark for Circuit Interpretation and Reasoning Capabilities of LLMs</title>
<link>https://hdl.handle.net/1721.1/159084</link>
<description>CIRCUIT: A Benchmark for Circuit Interpretation and Reasoning Capabilities of LLMs
Skelić, Lejla
The role of Large Language Models (LLMs) has not been extensively explored in analog circuit design, which could benefit from a reasoning-based approach that transcends traditional optimization techniques. In particular, despite their growing relevance, there are no benchmarks to assess LLMs’ reasoning capability about circuits. Therefore, we created the CIRCUIT dataset consisting of 510 question-answer pairs spanning various levels of analog-circuit-related subjects. The best-performing model on our dataset, GPT-4o, achieves 48.04% accuracy when evaluated on the final numerical answer. To evaluate the robustness of LLMs on our dataset, we introduced a unique dataset design and evaluation metric that enable unit-test-like evaluation by grouping questions into unit tests. In this case, GPT-4o can only pass 27.45% of the unit tests, highlighting that the most advanced LLMs still struggle with understanding circuits, which requires multi-level reasoning, particularly when involving circuit topologies. This circuit-specific benchmark introduces a scalable and reliable automatic evaluation method, transferable to other reasoning domains, and highlights LLMs’ limitations, offering valuable insights for advancing their application in analog integrated circuit design.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159084</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>ALFA-Chains: An Artificial Intelligence Approach to Exploit Chain Discovery in Networks</title>
<link>https://hdl.handle.net/1721.1/159083</link>
<description>ALFA-Chains: An Artificial Intelligence Approach to Exploit Chain Discovery in Networks
Tulla Lizardi, Miguel A.
Exploit chains play a crucial role in advanced persistent threats (APTs) and other malicious cyber campaigns. Sophisticated attackers can navigate across a network, escalate their privileges, and compromise valuable targets by executing the right exploits in the right order. However, finding these exploit chains is a challenging task requiring a broad knowledge of the vulnerabilities present in computer systems and the exploits that take advantage of them. Networks can be complex, with many hosts and intricate software stacks. Moreover, the range of known exploits and vulnerabilities is constantly growing, complicating the process of determining how they can be linked. This thesis introduces a solution, ALFA-Chains, that automates the discovery of exploit chains by leveraging classical AI planning, Large Language Models (LLMs), and existing exploit/vulnerability databases. ALFA-Chains describes networks and exploits using the Planning Domain Definition Language (PDDL), a formal language to represent planning problems. This allows us to use optimized off-the-shelf planners that have been developed by the AI planning community over many years. Our system takes natural language descriptions of exploits and classifies them into categories based on their preconditions and effects. From this intermediary representation, we can programmatically generate PDDL that captures the requirements needed to run the exploit and the access gained by the attacker. Due to this automated approach, ALFA-Chains is able to consider a vast set of exploits when determining if a network is susceptible to exploit chaining. We show how ALFA-Chains can process 1,880 Metasploit exploits and their corresponding 2,002 CVEs to detect exploit chains in a variety of realistic network configurations. We proceed to discuss potential applications of ALFA-Chains, including automated penetration testing and vulnerability prioritization.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159083</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Inductive Biases of Conditional Diffusion Models</title>
<link>https://hdl.handle.net/1721.1/159081</link>
<description>On the Inductive Biases of Conditional Diffusion Models
Yu, Christina
Diffusion models have achieved remarkable progress in recent years across various domains and applications, but how diffusion models generalize is still not well understood. While prior work predominantly focuses on unconditional diffusion models, in this thesis we focus on understanding generalization for conditional diffusion models, which is especially relevant for modern text- or observation-conditioned applications. In particular, we are interested in the inductive biases of conditional diffusion models which predispose them to certain forms of interpolation in regions outside the support of the training data. We observe that neural networks are capable of learning qualitatively different forms of interpolation, which may be influenced by the architecture and capacity of the network and other aspects of neural network training. We develop a potential framework to model the interpolation behavior of neural networks via nonparametric estimation, which has the property of being schedule-consistent, that is, truly denoising at every time step. We find that, assuming a neural network with sufficient capacity, conditional diffusion models are biased towards smoothing, which can lead to non-schedule-consistent behavior away from the training data and has a number of interesting consequences.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159081</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>All Pass Readout With Ring Resonators for Qubit Measurement</title>
<link>https://hdl.handle.net/1721.1/159080</link>
<description>All Pass Readout With Ring Resonators for Qubit Measurement
Zang, Alicia
Quantum computers may advance computing by efficiently solving certain classically hard problems, such as integer factoring and the simulation of quantum systems. Superconducting qubits, configurable artificial atoms composed of circuit elements, are a leading platform for building quantum computers. Many schemes for superconducting qubit readout include a weakly coupled port as a capacitor in the feedline, which allows for directionality in the readout signal. However, this impedance mismatch creates problems with resonator linewidth variation, standing waves, and voltage nodes in the feedline, leading to challenges in scaling to larger frequency-multiplexed systems. This thesis proposes an all-pass readout scheme that utilizes ring resonators that do not require a weakly coupled port, allowing for more modular qubit readout architectures.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159080</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verification of Go Channels</title>
<link>https://hdl.handle.net/1721.1/159079</link>
<description>Verification of Go Channels
Zhang, Jessica
Goose is a tool for translating a subset of the Go programming language into Perennial/Iris, which is an extension of Coq. However, Goose did not support channels, an important synchronization primitive for which Go is well known.&#13;
&#13;
This thesis presents an extension to Goose to support channels, including a model to represent Go channels and their operations in GooseLang, the language defined in Perennial/Iris that Goose translates into; an extension to the Goose translator to support channels; and a library of separation logic specifications that define the expected behavior of channel operations on open channels. Finally, this thesis evaluates how effective this model and library are for verifying Go code containing channels, and discusses some limitations and potential future work.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159079</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modulating the Electrochemistry of Calcium Metal Anodes</title>
<link>https://hdl.handle.net/1721.1/159078</link>
<description>Modulating the Electrochemistry of Calcium Metal Anodes
Melemed, Aaron M.
Rechargeable lithium ion (Li-ion) batteries have been a foundational energy storage technology. However, there are significant technoeconomic, geopolitical, and sustainability concerns regarding the procurement of Li-ion battery components, from transition metals within the cathode to lithium itself. As such, the development of battery chemistries beyond Li is vital to the long-term viability of electrochemical energy storage. Batteries based on calcium (Ca) metal anodes offer a compelling alternative; Ca is the fifth-most abundant element in the earth’s crust at 41,500 ppm (vs. 200 ppm for Li), offering potential improvements in scalability and sustainability. Ca metal also offers attractive electrochemical metrics, with a redox potential 0.17 V more positive than Li and a theoretical volumetric capacity of 2073 mAh/cm³ (vs. 850 mAh/cm³ for graphite and 2062 mAh/cm³ for Li metal). The field of Ca metal batteries is currently in its early stages, however, due to a limited number of electrolytes that can reversibly plate and strip Ca — a requirement for rechargeability. Two important challenges to overcome are (1) the formation of a passivating solid electrolyte interphase (SEI) between Ca and the electrolyte that inhibits Ca²⁺ transport to the anode, and (2) attractive Ca²⁺–anion interactions in the electrolyte that suppress ionic conductivity and hinder Ca electrochemistry. These limitations rendered Ca plating/stripping unattainable until a groundbreaking first demonstration in 2015. In the decade since, only a handful of reversible electrolytes have been reported, reflecting a severely constrained electrolyte design space. This thesis expands upon this design space through interfacial and electrolyte engineering, offering novel techniques to modulate Ca electrochemistry that provide new degrees of freedom for the development of Ca-based batteries.&#13;
&#13;
To begin, the practical assembly and cycling behavior of Ca foil electrodes are examined in a reversible electrolyte system for the first time. In contrast to historical work examining Ca foil in other common battery electrolytes, Ca foils are demonstrated to be electrochemically accessible for both plating and stripping in Ca(BH₄)₂ in tetrahydrofuran (THF). However, the first cyclic voltammetry (CV) cycle reflects persistent, history-dependent behavior from prior handling, which manifests as characteristic interface-derived features. Three exemplar SEIs exhibit this interface-dominated behavior during initial CV cycles, though the interfacial features diminish with continued cycling. These results reveal that long-term cycling behavior is, to a greater extent, governed by the electrolyte, informing ensuing research into electrolyte composition and speciation. &#13;
&#13;
Competitive interactions between Ca²⁺, anions, and solvent molecules are next harnessed to modify the Ca²⁺ coordination environment in this baseline electrolyte. An exemplar dual-salt electrolyte with differing Ca²⁺–anion interaction strengths, Ca(BH₄)₂ + Ca(TFSI)₂ in THF, is systematically altered. Introduction of a more-dissociating source of Ca²⁺ via Ca(TFSI)₂ drives re-speciation of strongly ion-paired Ca(BH₄)₂, generating larger populations of charged species and enhancing Ca plating currents. A critical parameter, the BH₄⁻/Ca²⁺ ratio, is proposed to govern electroactivity. Parasitic TFSI⁻ decomposition prevents Ca plating when the BH₄⁻/Ca²⁺ ratio is less than one. However, Ca plating in a TFSI⁻-containing electrolyte is demonstrated for the first time when the BH₄⁻/Ca²⁺ ratio is greater than one, as BH₄⁻ displaces strongly coordinating TFSI⁻ from the Ca²⁺ coordination environment. These results directly evidence the impact of coordination-shell chemistry on plating activity and indicate that Ca²⁺–BH₄⁻ interactions can unlock electroactivity in the presence of other Ca salts, significantly increasing the Ca electrolyte design space. &#13;
&#13;
Ca²⁺–solvent interactions are next examined as a subtler tool for electrochemical manipulation. The systematic introduction of glymes into the baseline electrolyte is shown to induce differential changes in Ca²⁺ coordination, as stronger glyme coordination displaces THF from the Ca²⁺ coordination environment, weakens Ca²⁺–BH₄⁻ interactions, and prompts BH₄⁻ redistribution. Examination of electrochemically formed SEI indicates that BH₄⁻-facilitated solvent decomposition governs Ca electrochemistry in these systems, as coordinated THF promotes beneficial borate formation in the SEI but coordinated glymes instead favor the formation of Ca²⁺-blocking phases. The link between Ca²⁺ coordination strength and solvent decomposition is corroborated through the quantification of gaseous products. Altogether, these strategies for the modulation of Ca electrochemistry reveal new avenues for electrolyte engineering that will promote further development of Ca-based batteries.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159078</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nature of the union between benzidine colors and cellulose</title>
<link>https://hdl.handle.net/1721.1/159011</link>
<description>Nature of the union between benzidine colors and cellulose
Colins, William H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1890
</description>
<pubDate>Wed, 01 Jan 1890 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159011</guid>
<dc:date>1890-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of the distribution of phosphorus and nitrogen in the products of modern milling</title>
<link>https://hdl.handle.net/1721.1/159010</link>
<description>Investigation of the distribution of phosphorus and nitrogen in the products of modern milling
Bragg, Lottie Almira.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1890; Includes bibliographical references (leaves 1-10).
</description>
<pubDate>Wed, 01 Jan 1890 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159010</guid>
<dc:date>1890-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of several methods of setting indigo vats</title>
<link>https://hdl.handle.net/1721.1/159009</link>
<description>An investigation of several methods of setting indigo vats
Bartlett, Spaulding.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1890
</description>
<pubDate>Wed, 01 Jan 1890 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159009</guid>
<dc:date>1890-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Oil of maize</title>
<link>https://hdl.handle.net/1721.1/159008</link>
<description>Oil of maize
Atwood, F. W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1890; Includes bibliographical references (leaf 1).
</description>
<pubDate>Wed, 01 Jan 1890 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159008</guid>
<dc:date>1890-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The design of a mining plant for a silver-lead mine</title>
<link>https://hdl.handle.net/1721.1/159007</link>
<description>The design of a mining plant for a silver-lead mine
Loo, Pang Chieh.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1916
</description>
<pubDate>Sat, 01 Jan 1916 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159007</guid>
<dc:date>1916-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laboratory testing and design of a mill for the treatment of a gold ore from Porcupine, Ontario</title>
<link>https://hdl.handle.net/1721.1/159006</link>
<description>Laboratory testing and design of a mill for the treatment of a gold ore from Porcupine, Ontario
Loo, Pang Chieh.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1917
</description>
<pubDate>Mon, 01 Jan 1917 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159006</guid>
<dc:date>1917-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heat transfer and friction for heating and cooling of fluids in pipes</title>
<link>https://hdl.handle.net/1721.1/159005</link>
<description>Heat transfer and friction for heating and cooling of fluids in pipes
Keevil, Charles S. (Charles Samuel)
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1930; Includes bibliographical references (leaves 135-136).
</description>
<pubDate>Wed, 01 Jan 1930 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159005</guid>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The classification of the indecomposable integral representations of the dihedral group of order 2p</title>
<link>https://hdl.handle.net/1721.1/159004</link>
<description>The classification of the indecomposable integral representations of the dihedral group of order 2p
Leahey, William Joseph.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1962; Vita.; Includes bibliographical references (leaf 84).
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159004</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A comparison of the outstanding securities of the Virginian and the Norfolk and Western Railway Companies</title>
<link>https://hdl.handle.net/1721.1/159003</link>
<description>A comparison of the outstanding securities of the Virginian and the Norfolk and Western Railway Companies
Zsembik, Thomas G.; Virtue, William D.
Thesis: B.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1948; Bibliography: leaf 100.
</description>
<pubDate>Thu, 01 Jan 1948 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159003</guid>
<dc:date>1948-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual system analysis through use of finite field sine gratings.</title>
<link>https://hdl.handle.net/1721.1/159002</link>
<description>Visual system analysis through use of finite field sine gratings.
Magnuski, Henry Stanley.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1973; Vita.; Bibliography: leaves 150-152.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159002</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of industry financing of a new jet transport for U.S. domestic airline service</title>
<link>https://hdl.handle.net/1721.1/159001</link>
<description>A study of industry financing of a new jet transport for U.S. domestic airline service
Evani, Sunder Rayma Murthy.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159001</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Review of intervention programs for pre-schoolers in Venezuela.</title>
<link>https://hdl.handle.net/1721.1/159000</link>
<description>Review of intervention programs for pre-schoolers in Venezuela.
Eskenasy, Sandra Patricia.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1978; Bibliography: leaf 147.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159000</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determination of silicon in commercial aluminum and the action of reagents upon the metal</title>
<link>https://hdl.handle.net/1721.1/158999</link>
<description>Determination of silicon in commercial aluminum and the action of reagents upon the metal
Du Pont, Pierre S.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1890; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1890 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158999</guid>
<dc:date>1890-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Multi-query Planning in Graphs of Convex Sets</title>
<link>https://hdl.handle.net/1721.1/158967</link>
<description>Fast Multi-query Planning in Graphs of Convex Sets
Morozov, Savva
Planning in Graphs of Convex Sets (GCS) is a recently developed optimization framework that seamlessly integrates discrete and continuous decision making. It naturally models and effectively solves a wide range of challenging planning problems in robotics, including collision-free motion planning, skill chaining, and control of hybrid systems. In this thesis, we study the multi-query extension of planning through GCS, motivated by scenarios where robots must operate swiftly within static environments. Our objective is to precompute optimal plans between predefined sets of source and target conditions, in an effort to enable fast online planning and reduce GCS solve times. Our solution consists of two stages. Offline, we use semidefinite programming to compute a coarse lower bound on the problem’s cost-to-go function. Then, online, this lower bound is used to incrementally generate feasible plans by solving short-horizon convex programs. We demonstrate the effectiveness of our approach through a variety of experimental domains: collision-free motion planning for a warehouse robot arm, item sorting for a top-down suction gripper, and footstep planning for a bipedal walker. In particular, in a warehouse-like scenario involving a seven-joint robot arm, our method generates higher-quality paths up to 100 times faster than existing motion planners.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158967</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structuring Representation Geometry in Self-Supervised Learning</title>
<link>https://hdl.handle.net/1721.1/158966</link>
<description>Structuring Representation Geometry in Self-Supervised Learning
Gupta, Sharut
The central promise of deep learning is to learn a map &#119891; : &#119987; → ℝ^&#119889; that transforms objects &#119987;—represented in their raw perceptual forms, such as images or molecular strings—into a representation space ℝ^&#119889; where everything that is hard to do with raw perceptual data becomes easy. For instance, measuring the similarity between two objects &#119909;₁ and &#119909;₂, expressed as tensors of pixel intensities, is non-trivial in their raw form, but becomes straightforward if &#119891; maps these objects to a space where simple Euclidean distances, ‖&#119891;(&#119909;₁) − &#119891;(&#119909;₂)‖₂, are meaningful measures of similarity. While this simple recipe has shown standout success in a range of tasks, certain applications require representations that encode richer structural relationships beyond pairwise similarity. For instance, tasks that encode relational information—such as “&#119883; is a parent of &#119884;” or “&#119860; is a treatment for &#119861;”—require embedding spaces that capture richer structural relationships. In this thesis, we explore what &#119891; should encode in order to be useful for a range of unknown downstream tasks, from the point of view of the geometric structure of representation space. We investigate this question in the context of self-supervised learning, a paradigm that extracts meaningful representations by leveraging the structure of the data itself without relying on explicit labels. Specifically, we propose adding additional geometric structure to the embedding space by enforcing transformations of input space to correspond to simple (i.e., linear) transformations in the embedding space. To this end, we introduce an equivariance objective and theoretically prove that its minima force transformations on input space to correspond to rotations on the spherical embedding space.
Our proposed method significantly improves performance on downstream tasks, and ensures sensitivity in embedding space to important variations in data (e.g., color, rotation) that existing contrastive methods do not achieve.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158966</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable and Modular Manufacturing of Insect-Scale Aerial Robots Towards Swarm Flight Demonstrations</title>
<link>https://hdl.handle.net/1721.1/158965</link>
<description>Scalable and Modular Manufacturing of Insect-Scale Aerial Robots Towards Swarm Flight Demonstrations
Hsiao, Yi-Hsuan
Insects demonstrate remarkable capabilities in navigating complex environments and executing tasks such as pollination and coordinated object transport. Inspired by these biological feats, insect-scale micro aerial vehicles (MAVs) have been developed with advanced flight functionalities, including collision resilience and aerial acrobatics. Despite these advancements, MAVs weighing less than a gram continue to face critical challenges in design, assembly, and repair. Additionally, limitations in sensing and control have prevented the realization of swarm-like behaviors, thereby constraining research on collective actions and potential applications such as distributed sensing. To overcome these obstacles, this work introduces a scalable and modular fabrication method for sub-gram MAVs. A parametric design algorithm automatically generates laser cutting templates from a minimal set of design parameters, while stereolithographic 3D printing is employed to fabricate static components such as airframes and connectors, significantly streamlining the production process. This modular approach improves assembly efficiency and repairability, reducing fabrication time by more than half. Using this methodology, two sub-gram MAVs successfully demonstrated controlled hovering and coordinated payload transport. These results represent a significant step toward enabling insect-inspired robotic swarms, providing a platform for future studies on collective flight behaviors and swarm robotics.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158965</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Data, to Models, and Back: Making Machine Learning Predictably Reliable</title>
<link>https://hdl.handle.net/1721.1/158964</link>
<description>From Data, to Models, and Back: Making Machine Learning Predictably Reliable
Ilyas, Andrew
Machine learning systems exhibit impressive performance, but we currently lack scalable ways to anticipate their successes, failure modes, and biases. This position limits our ability to deploy these systems in the appropriate contexts, and to build systems which we can confidently deploy in high-risk settings. Motivated by this state of affairs, this thesis aims to develop design principles for predictably reliable machine learning. Our ultimate goal is to enable developers to know when their models will work, anticipate when they will fail, and understand “why” in both cases. In pursuit of this goal, this thesis combines large-scale experiments with theoretical analysis to form a precise understanding of the ML “pipeline,” from training data (and the way we collect it), to learning algorithms, to deployment. Fully realized, such an understanding would allow us to build ML systems the same way we build buildings or airplanes—safely, scalably, and with a robust grasp of the underlying principles. In this thesis, we focus on four design choices within this pipeline: model deployment (Part I), dataset creation (Part II), data collection (Part III), and algorithm selection (Part IV). For each of these design choices, we use targeted experiments to uncover the corresponding principles that actually underlie the behavior of ML systems. We distill these principles into concise conceptual models which allow us to both reason about existing systems and design improved ones. Along the way, we will revisit, challenge, and refine various aspects of conventional wisdom surrounding ML model development.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158964</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>ΔB₀ Field Control in High Field MRI with Local Multicoil Shim Arrays</title>
<link>https://hdl.handle.net/1721.1/158963</link>
<description>ΔB₀ Field Control in High Field MRI with Local Multicoil Shim Arrays
Arango, Nicolas
Local multicoil ΔB₀ shim arrays enable low-cost, easily fabricated, and physically compact static magnetic field control in magnetic resonance imaging. This thesis presents frameworks for coil current calculation for homogeneity and for novel selective excitation applications. As MRI RF coils trend toward repositionable and flexible systems for their ease of use and tight-to-the-patient fit, ΔB₀ shim arrays have been left behind for lack of rapid, patient-on-the-table calibration. We show an inverse-problem approach with physics-based regularization and adaptation that accelerates calibration more than 50-fold. The numerical tools developed for calibration also proved useful for design, enabling novel upper bounds on ΔB₀ shim performance and new tools for automatic, anatomy-specific local multicoil array design.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158963</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-orthogonal multiple access using guessing random additive noise decoding aided macrosymbols</title>
<link>https://hdl.handle.net/1721.1/158962</link>
<description>Non-orthogonal multiple access using guessing random additive noise decoding aided macrosymbols
Yang, Kathleen
We propose guessing random additive noise decoding aided macrosymbols (GRAND-AM) as a non-orthogonal multiple access (NOMA) method that can detect, error-correct, and decode multiple users in multiple-input multiple-output (MIMO) systems that involve imperfect channel estimation, symbol-wise asynchronous transmission, and interference. GRAND-AM uses both joint multiuser detection and joint error-correction decoding to handle multiple access interference (MAI) from the users of interest. Our method avoids the codebook design and iterative decoding techniques associated with other commonly researched NOMA techniques. We introduce the concept of a macrosymbol, constructed from the combination of all user symbols, for the joint detection component of GRAND-AM. For the error-correction decoding component, we introduce multiple access channel (MAC) codes, which are used to split the channel rate between users and correct errors due to the MAI. Each user's information bits are encoded with independent MAC codes, which can be short, low-rate linear codes such as cyclic redundancy check (CRC) codes or space-time codes such as the Alamouti code. We use a soft-detection variant of GRAND, a near-maximum-likelihood (ML) universal decoding algorithm that inverts noise-effect sequences from a sequence of symbols to arrive at a codeword, to correct the received sequence of macrosymbols and ensure that all user codebooks are simultaneously satisfied in the joint decoding process. We show that using joint detection and joint decoding at the receiver leads to lower error rates than an individual detection and decoding technique, with performance comparable to an orthogonal multiple access (OMA) system of similar code rate and length.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158962</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>System-Technology Co-Optimization of Scaled Electronics&#13;
Based on Two-Dimensional Materials</title>
<link>https://hdl.handle.net/1721.1/158961</link>
<description>System-Technology Co-Optimization of Scaled Electronics&#13;
Based on Two-Dimensional Materials
Zhu, Jiadi
Over the past 60 years, the semiconductor industry has focused on developing highly scaled electronic devices and high-density integrated circuits. However, bottlenecks have arisen recently as transistor dimensions approach their physical limits and integration density becomes constrained. This thesis addresses these issues with two-dimensional (2D) materials, including the invention of a low-temperature (&lt; 300 °C) metal-organic chemical vapor deposition (MOCVD) method for growing 2D materials on 8-inch wafers and investigations of extreme device scaling and multi-channel transistors. Design-Technology Co-Optimization (DTCO) and System-Technology Co-Optimization (STCO) are employed to rapidly model, evaluate, and optimize device and circuit performance. Moreover, heterogeneous integration and monolithic 3D integration techniques are investigated, addressing challenges in integrating 2D materials with silicon complementary metal-oxide-semiconductor (CMOS) circuits and flexible substrates. This research aims to advance high-density, high-performance, low-power electronics for next-generation integrated systems.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158961</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Goal Inference from Open-Ended Dialog</title>
<link>https://hdl.handle.net/1721.1/158960</link>
<description>Goal Inference from Open-Ended Dialog
Ma, Rachel
Embodied AI agents are quickly becoming important and common tools in society. These agents should be able to learn about and accomplish a wide range of user goals and preferences efficiently and robustly. Large Language Models (LLMs) are often used because they enable rich, open-ended dialog between the human and the agent for accomplishing tasks according to human preferences.&#13;
&#13;
In this thesis, we argue that for embodied agents that deal with open-ended dialog during task assistance:&#13;
&#13;
1. AI agents should extract goals from conversations in the form of natural language (NL), which better captures human preferences, since natural language is the intuitive way for humans to communicate their task preferences to agents.&#13;
&#13;
2. AI agents should quantify and maintain uncertainty about these goals, to ensure that actions are taken only according to goals the agent is highly confident in.&#13;
&#13;
We present an online method for embodied agents to learn and accomplish diverse user goals. While offline methods like RLHF can represent various goals but require large datasets, our approach achieves similar flexibility with online efficiency. We extract natural language goal representations from conversations with Large Language Models (LLMs). We prompt an LLM to role play as a human with different goals and use the corresponding likelihoods to run Bayesian inference over potential goals. As a result, our method can represent uncertainty over complex goals based on unrestricted dialog. We evaluate in a text-based grocery shopping domain and an AI2Thor robot simulation. We compare our method to ablation baselines that lack either explicit goal representation or probabilistic inference.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158960</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Subject Image Generation</title>
<link>https://hdl.handle.net/1721.1/158959</link>
<description>Multi-Subject Image Generation
Yin, Tianwei
Diffusion models excel at text-to-image generation, especially subject-driven generation for personalized images. However, existing methods are inefficient due to subject-specific fine-tuning, which is computationally intensive and hampers efficient deployment. Moreover, existing methods struggle with multi-subject generation, as they often blend identities among subjects. In this thesis, we present FastComposer, which enables efficient, personalized, multi-subject text-to-image generation without fine-tuning. FastComposer uses subject embeddings extracted by an image encoder to augment the generic text conditioning in diffusion models, enabling personalized image generation based on subject images and textual instructions with only forward passes. To address the identity blending problem in multi-subject generation, FastComposer proposes cross-attention localization supervision during training, enforcing that the attention of reference subjects is localized to the correct regions in the target images. Naively conditioning on subject embeddings results in subject overfitting; FastComposer therefore proposes delayed subject conditioning in the denoising step to maintain both identity and editability in subject-driven image generation. FastComposer generates images of multiple unseen individuals with different styles, actions, and contexts. It achieves a 300×–2500× speedup compared to fine-tuning-based methods and requires zero extra storage for new subjects. FastComposer paves the way for efficient, personalized, and high-quality multi-subject image creation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158959</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic algorithm gradient ascent (GAGA) optimization&#13;
of compact symmetry-breaking photonic crystals</title>
<link>https://hdl.handle.net/1721.1/158958</link>
<description>Genetic algorithm gradient ascent (GAGA) optimization&#13;
of compact symmetry-breaking photonic crystals
Gold, Hannah T.
Fundamental limits of thermal radiation are imposed by Kirchhoff’s law, which assumes the electromagnetic reciprocity of a material or material system. Thus, breaking reciprocity can enable breaking barriers in thermal efficiency engineering¹. This thesis presents 1D photonic crystals composed of Weyl/Dirac semimetal and dielectric layers, whose structures are optimized to maximize the nonreciprocity of infrared radiation absorptance/emittance in planar and compact designs. Two different mechanisms to enable nonreciprocal infrared absorbers/emitters are simulated and compared – the anomalous Hall effect in Weyl semimetals² and electric-current-induced Fizeau drag in either Dirac or Weyl semimetals³. To engineer an ultra-compact absorber structure that does not require gratings or prisms to couple light, a genetic algorithm (GA) was used to globally maximize nonreciprocity in the design, followed by numerical gradient ascent as a local optimization (together, GAGA) to further enhance the design. The first absorber design takes advantage of the intrinsic nonreciprocity of time-reversal-symmetry (TRS) breaking Weyl semimetals due to their pseudomagnetic field in momentum space. The GAGA methodology is then applied to design and optimize a flat absorber using inversion-symmetry (IS) breaking Weyl/Dirac semimetals as active layers, in which tunable nonreciprocity is induced through an applied DC current bias. This momentum bias imparts plasmon Fizeau drag, the drag of an electrical current on propagating surface plasmon polaritons (SPPs). A recently developed semi-classical theory is used to model SPP transport along interfaces of 3D semimetals under Fizeau drag³.
Lastly, in both cases the optimization algorithm accounts for both s- and p-polarized absorptance spectra to create a final design suitable for thermal applications, which maximizes the nonreciprocal absorptance of p-polarized light and simultaneously minimizes the parasitic, reciprocal absorptance of s-polarized light.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158958</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven General Purpose Foundation Models for&#13;
Computational Pathology</title>
<link>https://hdl.handle.net/1721.1/158957</link>
<description>Data-Driven General Purpose Foundation Models for&#13;
Computational Pathology
Lu, Ming Yang (Max)
The field of computational pathology has undergone a remarkable transformation in recent years. Researchers have leveraged supervised and weakly-supervised deep learning with varying degrees of success to address a wide range of tasks, including cancer subtyping and grading, metastasis detection, survival and treatment response prediction, tumor site-of-origin identification, mutation prediction, biomarker screening, and more. However, traditional task-specific models often require extensive labeled data and struggle to generalize across diverse pathology tasks. This limitation motivates the exploration of foundation models, which promise a more scalable, versatile solution by learning broad representations that can be adapted to various downstream applications. In this thesis, I will investigate the capabilities and limitations of data-driven foundation models in computational pathology. Specifically, I will explore two frameworks for developing general-purpose encoder models for pathology images: one using paired image-text data, and another leveraging self-supervised learning on large-scale unlabeled images. Additionally, I will examine downstream applications of these foundation models, including zero-shot transfer to gigapixel whole slide images and the development of an interactive multimodal AI assistant for pathologists.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158957</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailored Mechanical Response of 3D Microgranular Crystals with Hierarchical Architecture</title>
<link>https://hdl.handle.net/1721.1/158956</link>
<description>Tailored Mechanical Response of 3D Microgranular Crystals with Hierarchical Architecture
Figueroa, Samuel D.
Granular media exhibit extraordinary impact-mitigating properties due to their nonlinear grain-to-grain interactions, enabling efficient energy dissipation and wave perturbation under dynamic loading—behaviors unattainable in conventional monolithic materials. Recent efforts have sought to engineer granular systems with tunable mechanical responses, though few have begun to realize them as functional architected materials. Here, we introduce a two-level architected granular framework that programs spherical microgranular media across both grain-level (ellipsoidal microvoids) and bulk packing-level architectures, offering surprising control over static and dynamic properties. Using nanoindentation experiments, we reveal tunable quasi-static stiffness behavior, where hollow architected granular packings can exhibit superior mass-normalized energy dissipation compared to their fully dense counterparts. Finite element simulations uncover a structurally engineered Poisson effect, enabling nonlocal contact mechanisms that enhance load-bearing capacity across different packing structures. Custom direct impact experiments offer a potential route to demonstrating the effectiveness of our multi-scale design in dynamically programming energy dissipation. Our findings demonstrate that a hierarchical granular crystal exhibits enhanced specific energy absorption at a fraction of the weight of its fully dense counterpart, along with unique nonlocal stress redistribution, surpassing classical granular mechanics through architectural design. This work establishes a path toward lightweight, tunable, and impact-resistant metamaterials, with broad applications in nonlinear waveguiding, energy dissipation, and protective systems.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158956</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exciton Dynamics and Optical Properties of Lead Halide&#13;
Perovskite Nanocrystals: From Nanorods to Nanocubes</title>
<link>https://hdl.handle.net/1721.1/158955</link>
<description>Exciton Dynamics and Optical Properties of Lead Halide&#13;
Perovskite Nanocrystals: From Nanorods to Nanocubes
Šverko, Tara
Lead halide perovskites, particularly CsPbBr₃, have emerged as leading light emitters for their spectral purity, brightness, and facile synthesis. Their soft, ionic lattice makes them unusually defect-tolerant but introduces problems with stability. Additionally, dephasing mechanisms and coupling to phonons are not yet well understood in these semiconductors. &#13;
In the first part of the thesis, I investigate highly confined, anisotropic CsPbBr3 nanorods, elucidating the photophysics governing their broad single-particle linewidths. I utilize ensemble and single particle photoluminescence techniques across a wide temperature range in order to pinpoint exciton-phonon coupling mechanisms, structural and surface effects, and spin mixing in these novel materials.&#13;
In the second part of the thesis, I focus on the opposite size regime, where collective behaviour dominates the optical properties. I develop a novel spectroscopy to pinpoint dephasing mechanisms that could reduce superradiant and coherent emission in order to promote rational design and future integration of these nanocrystals into quantum information devices.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158955</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Variational Lower Bound to Mitigate Batch Effect in&#13;
Molecular Representations</title>
<link>https://hdl.handle.net/1721.1/158954</link>
<description>A Variational Lower Bound to Mitigate Batch Effect in&#13;
Molecular Representations
Wang, Chenyu
High-throughput drug screening – using cell imaging or gene expression measurements as readouts of drug effect – is a critical tool in biotechnology to assess and understand the relationship between the chemical structure and biological activity of a drug. Since large-scale screens have to be divided into multiple experiments, a key difficulty is dealing with batch effects, which can introduce systematic errors and non-biological associations in the data. We propose InfoCORE, an Information maximization approach for COnfounder REmoval, to effectively deal with batch effects and obtain refined molecular representations. InfoCORE establishes a variational lower bound on the conditional mutual information of the latent representations given a batch identifier. Experiments on drug screening data reveal InfoCORE’s superior performance in a multitude of tasks including molecular property prediction and molecule-phenotype retrieval. Additionally, we show how InfoCORE offers a versatile framework that resolves general distribution shifts and data fairness issues by minimizing correlation with spurious features or removing sensitive attributes.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158954</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems for Usable Machine Learning</title>
<link>https://hdl.handle.net/1721.1/158953</link>
<description>Systems for Usable Machine Learning
Zytek, Alexandra
Many real-world decision problems are complex, with outcomes that are difficult to measure and evaluate. The impact of decisions made in these domains is nuanced and takes a long time to be fully realized. Individual mistakes can lead to significant costs, and computational tools such as ML models must be integrated alongside existing, well-established human workflows. These properties mean that ML solutions must be usable in order to be effective — in other words, developed and deployed in such a way that humans use them in decision-making and outcomes improve. To improve ML usability, developers create ML tools: diverse kinds of interfaces that allow users to understand ML models and their predictions. In this thesis, we use real-world case studies to synthesize generalizable lessons for applying usable ML tools to complex, real-world decision problems. Based on experience developing ML tools for child welfare screening, we propose a formal taxonomy of feature properties related to usability and interpretability. We then discuss the design and development of a system that makes generating ML explanations from such interpretable features more effective: Pyreal, a framework and Python library that uses updated data transformers to generate explanations of ML models and predictions using interpretable features. Motivated by the development and customization effort required to build ML tools for new applications, we next present Sibyl, a configurable and comprehensive system for generating usable ML interfaces for a wide range of applications, along with a case study applying Sibyl to the decision problem of wind turbine monitoring. We also present Explingo, our system for transforming traditional ML explanations into natural language narratives to further improve the usability of ML outputs.
We finish by discussing the practical lessons this work demonstrates related to the need for usable ML, the challenges specific to these complex applications, ethical questions, and future directions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158953</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Foundations for Pragmatic Data Science</title>
<link>https://hdl.handle.net/1721.1/158952</link>
<description>Causal Foundations for Pragmatic Data Science
Squires, Chandler
A key goal of scientific discovery is the acquisition of knowledge that is practically useful for societal endeavors, such as the development of medicine or the design of fruitful economic policies. In this thesis, I place front and center the role that scientific models play in the process of decision-making, emphasizing the importance of causal models in science, i.e., models which describe the possible effects of actions upon a system. The work contained explores central topics in this domain, including causal discovery (learning causal models from data), causal representation learning (learning how to coarse-grain observations into causally sensible “macro-variables”), and end-to-end causal inference (the interplay between causal discovery and downstream decision-making).
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158952</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for Foundational End-to-End Verification of Systems Stacks</title>
<link>https://hdl.handle.net/1721.1/158951</link>
<description>Techniques for Foundational End-to-End Verification of Systems Stacks
Gruetter, Samuel
Today's software is full of bugs and vulnerabilities. Formal verification provides a promising remedy through mathematical specifications and machine-checked proofs that the implementations conform to the specifications. However, there could still be bugs in the specifications or in the verification tools, which could lead to missed bugs in the software being verified. Therefore, this dissertation advocates for foundational end-to-end verification, a proof-based software development method that can mitigate both of these concerns:&#13;
&#13;
It is end-to-end in the sense that the correctness proofs of individual components are used to discharge the assumptions of adjacent components throughout the whole stack, resulting in end-to-end theorems that only mention the top-most and bottom-most specifications, so that bugs in intermediate specifications cannot invalidate the soundness of the end-to-end statement anymore.&#13;
&#13;
The method is foundational in the sense that the soundness of the proofs relies only on the foundations of mathematics and on the correctness of a small proof-checking kernel, but not on the correctness of other, domain-specific verification tools, because these tools are either proven correct once-and-for-all, or they output proofs that are checked by the kernel.&#13;
&#13;
Ensuring that all the reasoning can be checked by the same small foundational kernel requires considerable effort, and the first part of this dissertation presents techniques to reduce this effort:&#13;
&#13;
Omnisemantics, a new style of semantics that can be used instead of traditional small-step or big-step operational semantics, offer a smooth way of combining undefined behavior and nondeterminism, and enable forward-simulation compiler correctness proofs with nondeterministic languages, whereas previous approaches need to fall back to the much less convenient backward simulations if support for nondeterminism is needed.&#13;
&#13;
Live Verification is proposed: a technique that turns an interactive proof assistant into a programming assistant that displays the symbolic state of the program as the user writes it and allows the user to tweak the symbolic state as long as the tweaks are provably sound. As an additional convenience, instead of stating lengthy loop invariants, the user only needs to give the diff between the symbolic state before the loop and the desired loop invariant, resulting in shorter and more maintainable annotations. Finally, in order to make Live Verification practical, a number of additional proof techniques are presented.&#13;
&#13;
The second part of the dissertation shows how these techniques were useful in three collaborative case studies: an embedded system running on a verified processor with an end-to-end proof where the software-hardware interface specification cancels out, a cryptographic server with an end-to-end proof going from high-level elliptic-curve math all the way down to machine code, and a trap handler to catch unsupported-instruction exceptions whose correctness proof combines program-logic proofs about C-level functions, a compiler correctness proof, and proofs about hand-written assembly.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158951</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Congestion Control for DNN training clusters</title>
<link>https://hdl.handle.net/1721.1/158950</link>
<description>Congestion Control for DNN training clusters
Narang, Sanjoli
Modern DNN workloads generate network traffic that differs strikingly from conventional data-center traffic. DNN training jobs generate a periodic traffic pattern in which all subsequent flows depend on the completion of the currently running flow. Although this periodic behavior calls for a new, non-conventional congestion control protocol for DNN training clusters, it also creates an unprecedented opportunity to approximate the optimal schedule for DNN jobs in a distributed manner without requiring priority queues, centralized information, or switch hardware support. Prior work on MLTCP proposed updates to existing congestion control algorithms to make them capable of minimizing network congestion when DNN jobs compete for the network. In this thesis, we propose several techniques to expand the scope of prior work to support DNN jobs with more complex communication patterns or parallelization strategies, and to further improve the performance speedup over TCP. With two straightforward ideas for updating the congestion control parameters, we expand the performance benefits of MLTCP to a wider set of periodic DNN jobs. Augmenting existing congestion control algorithms with MLTCP provides an effective guiding mechanism for a random search to find the optimal interleaved schedule for competing DNN jobs. Our contributions boost this guided search to improve performance further. We provide detailed theoretical analysis and extensive flow-level simulations to take a deep dive into the convergence, performance speedup, and fairness of MLTCP with the proposed changes.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158950</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Input Adaptive Allocation of Language Model Computation</title>
<link>https://hdl.handle.net/1721.1/158949</link>
<description>Input Adaptive Allocation of Language Model Computation
Damani, Mehul
Computationally intensive decoding procedures—including search, reranking, and self-critique— can improve the quality of language model (LM) outputs in problems spanning code generation, numerical reasoning, and dialog. Existing work typically applies the same decoding procedure for every input to an LM. But not all inputs require the same amount of computation to process. Can we allocate decoding computation adaptively, using more resources to answer questions whose answers will be harder to compute? We present an approach that predicts the distribution of rewards given an input and computation budget, then allocates additional computation to inputs for which it is predicted to be most useful. We apply this approach in two decoding procedures: first, an adaptive best-of-k procedure that dynamically selects the number of samples to generate as input to a reranker; second, a routing procedure that dynamically responds to a query using a decoding procedure that is expensive but accurate, or one that is cheaper but less capable. Across a suite of programming, mathematics, and dialog tasks, we show that accurate computation-allocation procedures can be learned, and reduce computation by up to 50% at no cost to response quality, or improve quality by up to 10% at a fixed computational budget.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158949</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tackling Algorithmic Problems on Massive Graphs</title>
<link>https://hdl.handle.net/1721.1/158948</link>
<description>Tackling Algorithmic Problems on Massive Graphs
Biswas, Amartya Shankha
As datasets grow increasingly larger, traditional computational models, which require reading the entire input, become impractical due to constraints on time, memory, and randomness. This thesis explores alternative algorithmic approaches for processing massive graphs under these constraints. Specifically, we focus on algorithms for the following graph problems. Motif Counting and Sampling: This involves developing efficient algorithms for counting and sampling small motifs (constant-sized subgraphs) like stars and triangles, which are crucial for applications in biology, chemistry, and social networks. The thesis presents improved methods for both approximate and exact counting and sampling of general motifs. Graph Sparsification and Spanners: The problem of sparsifying graphs involves removing (usually most) edges of the input graph in a way that preserves essential properties such as connectivity and approximate distances. This thesis introduces algorithms for constructing sparse spanning graphs, as well as spanners, sparse subgraphs that approximate distances up to a multiplicative factor. We obtain faster algorithms in parallel settings, initiate the study of average-case graph inputs in the sublinear setting, and obtain results beyond the worst-case lower bounds. We investigate both of these problems in different models, including sublinear query access, local computation algorithms (LCAs), and the MPC model, and also discuss implications of these results in distributed and parallel models of computation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158948</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Lewis Acidic Pnictenium Ions Using Carbone and Capping Arene Ligands for Bond Functionalization</title>
<link>https://hdl.handle.net/1721.1/158947</link>
<description>Design of Lewis Acidic Pnictenium Ions Using Carbone and Capping Arene Ligands for Bond Functionalization
Warring, Levi
Interest in the chemistry of antimony and bismuth is rapidly growing due to the isolation of low-coordinate, subvalent, or Lewis acidic compounds that can mediate reactivity traditionally reserved for their d-block counterparts. Ligand strategies play a key role in the isolation of such species. Anionic ligands with large steric profiles, as well as carbenes, have been widely implemented to stabilize subvalent heavy group 15 element compounds. However, synthetic strategies to prepare Lewis acidic antimony and bismuth complexes remain underexplored. Cationization is one of the most common methods used to enhance the Lewis acidity of heavy group 15 elements by creating a vacant p orbital on the pnictogen atom. Lewis acids are also employed in frustrated Lewis pair (FLP) chemistry to enable intra- and intermolecular reactivity. Carbone ligands, which are neutral, 4-electron donor ligands, offer a unique ability to support highly electrophilic main-group elements. This dissertation investigates the stabilization of heavy pnictenium ions using neutral donor ligands, such as carbodicarbenes and capping arene ligands, and explores their potential in Lewis acid-mediated chemistry. In Chapter Two, the synthesis and characterization of a series of carbodicarbene-pnictenium ions is described. The utilization of strongly donating carbodicarbene ligands enables the isolation of mono-, di-, and tricationic antimony and bismuth species. These ions have multiple-bond character between carbon and antimony/bismuth, representing some of the first examples of cationic stibaalkene and bismaalkene compounds. The Lewis acidity of these ions was assessed using the Gutmann-Beckett method and computationally derived fluoride ion affinities, the latter of which indicate Lewis superacidity for the bis(pyridyl)carbodicarbene-pnictenium trications. In Chapter Three, the reactivity of the bis(pyridyl)carbodicarbene stibenium trication toward C(sp³)–H and C(sp)–H bonds is demonstrated.
The Lewis superacidic antimony cation mimics the chemistry of frustrated Lewis pairs in the presence of the sterically encumbered base 2,6-di-tert-butylpyridine to enable C–H bond breaking of acetonitrile and a set of terminal alkynes. Kinetic analyses, in conjunction with density functional theory, support a mechanism by which acetonitrile coordinates to antimony, acidifying the C–H bonds, which can be subsequently deprotonated by the base in solution. The resulting stiba-methylene nitrile and stiba-alkynyl adducts undergo reactivity with elemental iodine to generate iodoacetonitrile and 1-iodoalkynes while reforming a stibenium trication. In Chapter Four, capping arene ligands are coordinated to antimony and bismuth tribromide to afford a series of κ²-bound complexes. Bromide abstraction from these neutral adducts affords ionic compounds. Both the neutral and ionic species have distinctive Menschutkin interactions, whereby the lone pair on the pnictogen atom is oriented toward the π system of the pendant arene. Shortening of the distances between the pyridyl nitrogen atoms and pnictogen atom are observed upon cationization from the neutral adducts. The Lewis acidity of these complexes was assessed using the Gutmann-Beckett method. Notably, acceptor numbers as high as 111 are observed for these ions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158947</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Structure for Efficient and Dexterous Contact-Rich Manipulation</title>
<link>https://hdl.handle.net/1721.1/158946</link>
<description>Leveraging Structure for Efficient and Dexterous Contact-Rich Manipulation
Suh, Hyung Ju Terry
Contact-rich manipulation has proved challenging due to the need to consider multiple combinatorial possibilities of making or breaking contact with the surrounding environment. As a result, existing methods have often resorted to combinatorial optimization that utilizes dynamics structure but considers all possibilities exhaustively, or to compute-heavy and inefficient sampling methods that rely on blackbox optimization such as Reinforcement Learning (RL). In this thesis, I aim to show that by combining structured contact smoothing with local gradient-based control and sampling-based motion planning, we can bypass the combinatorial explosion of contact modes while still leveraging structure, achieving highly efficient contact-rich manipulation. To achieve this capability, I first shed light on how RL abstracts contact modes and optimizes difficult landscapes by combining stochastic smoothing and zeroth-order optimization; yet, I show how following a similar stochastic strategy while using gradients suffers from several drawbacks such as empirical bias and high variance. To leverage structure in a more helpful manner, I propose a method for smoothing contact dynamics without relying on stochastic smoothing, bypassing these drawbacks. Using this smoothing scheme, I present a highly efficient and performant local controller based on gradient-based trajectory optimization and model predictive control. Finally, I connect these local control capabilities with global sampling-based motion planners to achieve long-horizon global plans. The proposed method achieves contact-rich plans such as dexterous in-hand reorientation and whole-body manipulation much more efficiently than RL while being highly scalable compared to methods that explicitly reason about contact modes.
These results reduce contact-rich manipulation to kinodynamic motion planning, and shift its true difficulty from the combinatorial explosion of contact modes to combinatorial and highly non-local decisions over motion-planning behaviors.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158946</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quality-Centric Single-Image Procedural Material Generation</title>
<link>https://hdl.handle.net/1721.1/158945</link>
<description>Quality-Centric Single-Image Procedural Material Generation
Li, Beichen
Procedural materials, represented as functional node graphs, are ubiquitous in computer graphics for photorealistic material appearance design. They allow users to perform intuitive and precise editing to achieve desired visual appearances. However, even for experienced artists, creating a procedural material given an input image requires professional knowledge and significant effort. Current inverse procedural material modeling approaches enable the automatic generation of procedural materials from input images. However, the visual quality of the generated materials is fundamentally limited by insufficient high-quality training data from industry-standard procedural materials, reliance on token-space supervision without visual feedback, and the absence of approximation-free node parameter post-optimization. My thesis presents advanced dataset augmentation, model training, and parameter post-optimization algorithms to address these challenges, significantly improving the perceptual match between the generated procedural material and the input image. Furthermore, the methodologies can be applied to other inverse procedural graphics problems to expedite similar artistic creation processes.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158945</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Additively Manufactured Quadrupole Mass Filters for Low-Cost and High-Performance Applications</title>
<link>https://hdl.handle.net/1721.1/158944</link>
<description>Development of Additively Manufactured Quadrupole Mass Filters for Low-Cost and High-Performance Applications
Eckhoff, Colin C.
With a growing need for more compact and affordable mass spectrometers, many efforts have been made to miniaturize quadrupole mass filters (QMFs). Unfortunately, these efforts have yielded devices with inadequate performance for practical applications in analytical chemistry. This study reports the successful creation of a low-cost, high-performance QMF by means of additive manufacturing. Vat photopolymerization of a glass-ceramic feedstock was used to create a novel, monolithic structure, and selective electroless nickel-boron plating was used to metallize the structure, forming a completed QMF that is lightweight and inexpensive to produce (20 USD per device). Furthermore, additive manufacturing allows QMF dimensions to be rapidly scaled to the optimal sizes for a given application, which are often larger than those of most prior affordable quadrupole designs. Despite the limited precision of additive manufacturing, optimization techniques can be leveraged to produce high-quality devices with smooth surfaces. As a result, our QMFs achieved mass resolutions up to 164 at 69 Da, with abundance sensitivities sufficient to detect carbon-13 isotopes at lower masses—a level of performance comparable to commercial devices. These results indicate that additive manufacturing, properly employed, can significantly advance the state of the art of QMFs and other mass spectrometry technologies.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158944</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast methods for full-wave electromagnetic solvers in MRI</title>
<link>https://hdl.handle.net/1721.1/158943</link>
<description>Fast methods for full-wave electromagnetic solvers in MRI
Guryev, Georgy D.
High static field (3T) MR scanners can produce human tissue images of astounding clarity, but rely on high frequency (123 MHz) electromagnetic radiation that generates complex in-tissue field patterns that are patient-specific and potentially harmful. Many such scanners use multiple transmitters to better control field patterns, but then adjust the transmitters based on general guidelines rather than optimizing for the specific patient, mostly because computing patient-specific fields was presumed far too slow. It was recently demonstrated that the combination of fast low-resolution tissue mapping and fast voxel-based field simulation can be used to perform a rapid patient-specific MR safety check. However, the field simulation still required several minutes, making it too slow to perform the dozens of simulations that would be needed for patient-specific optimization. In this work, we develop a set of numerical acceleration techniques that facilitate fast field simulations, bridging the gap between the performance of current state-of-the-art full-wave electromagnetic packages and the time requirements dictated by real-time patient-specific field optimization in a clinical setting. These techniques cater to a large range of body sizes and complex coil geometries.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158943</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Weak Shock Waves on a Chip: Generation and Applications</title>
<link>https://hdl.handle.net/1721.1/158942</link>
<description>Weak Shock Waves on a Chip: Generation and Applications
Deschamps, Jude
In conventional laser-shock experiments in solid media, shock waves are typically excited by the ablation of a photoacoustic transducer layer deposited onto the sample of interest. Unavoidably, the target materials are damaged. This necessitates changing targets after each exposure, likely degrading shot-to-shot reproducibility and data quality while lowering the throughput of the experiment. Motivated by the need to generate large-amplitude transient strain waves at a high repetition rate, this thesis introduces a novel platform for the non-destructive generation and amplification of acoustic waves with associated strain levels in the percent range — up to the formation of shock waves. The acoustic amplification scheme is first described. Then, owing to the technique's ability to repeatedly load a material with finite-amplitude strain waves, the platform is demonstrated for microscale fatigue testing. Finally, the strain localization of surface acoustic waves is leveraged to transiently modulate a monolayer of a transition metal dichalcogenide deposited on a substrate.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158942</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Robotic Manipulation of Liquid Using a Digitally Fabricated Intelligent Wearable Device</title>
<link>https://hdl.handle.net/1721.1/158941</link>
<description>Enhancing Robotic Manipulation of Liquid Using a Digitally Fabricated Intelligent Wearable Device
Lee, Young Joong
Despite recent exponential advances in computer vision and reinforcement learning, it remains challenging for robots to interact with liquids due to visual obstructions, transparent liquids, and fine-grained splashes. Yet, a substantial opportunity exists for robotics to excel in liquid identification and manipulation, given its potential role in chemical handling in laboratories and in various manufacturing sectors such as pharmaceuticals or beverages. Recent advancements in electronic wearables, designed to replicate or surpass the functions and attributes of human skin, and their convergence with machine learning have provided opportunities to enhance the capabilities of robotic systems. Here, we present a novel approach for liquid class identification and position estimation with a robotic wearable device that can ‘see through’ the container, leveraging electrical impedance sensing. We design and mount a digitally embroidered electrode array on a commercial robotic gripper. Coupled with a customized impedance sensing board, we collect data on liquid manipulation with a swept-frequency sensing mode and a frequency-specific impedance measuring mode. Our learning-based models achieve an accuracy of 93.33% in classifying 9 different types of liquids (8 liquids + air) and 97.65% in estimating the liquid position in the cup without any vision system present. We investigate the effectiveness of our system with a series of ablation studies. These findings highlight our work as a promising solution for enhancing robotic manipulation in liquid-related tasks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158941</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Optimization of Tunneling Nanoelectromechanical Switches</title>
<link>https://hdl.handle.net/1721.1/158940</link>
<description>Design and Optimization of Tunneling Nanoelectromechanical Switches
Dang, Tong
As silicon complementary metal-oxide-semiconductor (CMOS) technology nears its scaling limits, nanoelectromechanical (NEM) switch relays have emerged as promising candidates for complementing CMOS technology due to their superior characteristics, including zero leakage, steep subthreshold swings, high on-off current ratios, and robustness in harsh environments. However, the practical integration of NEM switches still faces challenges such as high actuation voltages, stiction, and slower switching speeds compared to CMOS. One promising strategy to mitigate these issues is the integration of a self-assembled monolayer (SAM) to create tunneling NEM switches. Such switches can achieve nanometer-scale mechanical modulation of the gaps between electrodes, showing the potential to overcome the limitations of conventional NEM switches by exhibiting low actuation voltages, high switching speeds, and minimal stiction. Nevertheless, the tunneling NEM switches reported to date still show limited performance and require intricate fabrication processes. Additionally, the functional tunneling NEM switches demonstrated so far are limited to two-terminal architectures. This thesis explores innovative designs, fabrication techniques, and material choices to address these limitations and to develop tunneling NEM switches with enhanced performance and reliability for next-generation NEM logic applications. To this end, switches with various structures have been fabricated and investigated, and their respective characteristics are analyzed. In a three-terminal lateral structure fabricated using entirely conventional nanofabrication techniques, switching is demonstrated in both contact and tunneling modes. While operation in direct contact mode shows a high on-off ratio, the integration of the SAM leads to a significantly reduced actuation voltage of 2 V and lower hysteresis.
Further, two-terminal vertical structured devices are studied in tunneling mode, and they consistently demonstrate operation cycles exceeding 100, with a maximum of over 7000, which underscores the reliability prospects of the SAM. The trends in I-V characteristics indicate that the SAM might have experienced physical deformation due to compression, highlighting a potential area for future research in the molecular engineering of self-assembled monolayers.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158940</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Structures for Scalable Vertical Gallium Nitride Power Devices</title>
<link>https://hdl.handle.net/1721.1/158939</link>
<description>Novel Structures for Scalable Vertical Gallium Nitride Power Devices
Perozek, Joshua Andrew
Solid state electronic devices have been the backbone of modern power systems for decades. However, as we enter an era fuelled by renewable energy and defined by pervasive electrification, novel power devices must be developed to address the increasingly stringent demands for high power density and efficiency. In this thesis, the theory and fabrication of several new gallium nitride (GaN) power devices will be developed to push beyond current device limitations.&#13;
&#13;
A key advancement stems from the recognition that vertical GaN power devices are fundamentally three-dimensional. Fabrication of these devices does not readily benefit from the decades of expertise gained in planar processing within the silicon industry. Instead, we will present how a new approach to creating vertical fin-based devices will enable self-aligned fabrication of vertical GaN finFETs and related devices. &#13;
&#13;
Within this work, we also explore the scalability of vertical GaN finFETs. Working with 8-inch GaN substrates, we demonstrate that vertical finFETs can be fabricated using a fully CMOS compatible process flow. This enables a scalable pathway to the widespread adoption of GaN by leveraging existing manufacturing capabilities.&#13;
&#13;
As a final look towards the future of GaN devices, we explore methods to surpass the one-dimensional, unipolar limit of GaN through devices known as superjunctions. The superjunction theory that has been highly successful for Si devices is applied to GaN, and a new framework for designing such devices is presented. Using our approach to creating vertical fin-based devices, we are able to fabricate record high-aspect-ratio demonstrations of a new class of fin diodes that reveal a promising path towards the next generation of GaN power devices.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158939</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Generalizable Systems by Learning Composable Energy Landscapes</title>
<link>https://hdl.handle.net/1721.1/158938</link>
<description>Learning Generalizable Systems by Learning Composable Energy Landscapes
Du, Yilun
How can we construct intelligent embodied agents in the physical world? Such agents should be able to autonomously solve tasks that have not been seen before, subject to external disturbances in the environment, as well as new combinations of factors such as lighting, varying sensor inputs, and unexpected interactions with agents and other objects. An important subgoal towards constructing such intelligent agents is to construct models that can robustly generalize, not only to distributions of tasks similar to ones seen at training time but also to new unseen distributions. This departs from standard machine learning techniques, which usually assume identical training and test distributions. Towards this goal, in this dissertation, we illustrate how we can achieve certain forms of generalization by estimating energy landscapes over possible predictions for each task, with accurate predictions assigned lower energy. This modeling choice formulates prediction as a search process on the energy landscape, enabling zero-shot generalization to new constraints by adapting the energy landscape. In addition, this allows us to generalize to entirely new distributions of tasks in a zero-shot manner by composing multiple learned energy landscapes together. We first introduce a set of techniques to train energy landscapes and an algebra in which we can compose and discover composable energy landscapes. Next, we illustrate how energy landscapes can be composed in a diverse set of ways, including logical operators, probability distributions, graphical models, constraints, and hierarchical compositions, enabling effective generalization across vision, decision-making, multimodal, and scientific settings.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158938</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>(De)fluorination of Organic Substrates Mediated by Nontrigonal Phosphorus Triamide</title>
<link>https://hdl.handle.net/1721.1/158937</link>
<description>(De)fluorination of Organic Substrates Mediated by Nontrigonal Phosphorus Triamide
Lim, Soohyun
Due to their high electronegativity and small size, fluorine atoms form the strongest single bonds to carbon and impart unique physical, chemical, and physiological properties to organic compounds. Therefore, the number of industrially synthesized products containing fluorine has seen a substantial increase in recent decades. The strategies to access organofluorine compounds include two opposite approaches: 1) (nucleophilic, electrophilic, or radical) fluorination, and 2) selective defluorination of polyfluorinated substrates. Both creating and breaking C−F bonds in a selective manner are of great importance, and both present challenges of their own. The work herein describes chemical transformations involving the cleavage or formation of C−F bonds mediated by a nontrigonal phosphorus triamide. Thanks to the enhanced biphilicity resulting from geometric deformation, the Cs-symmetric tricoordinate phosphorus compound can activate strong covalent bonds.&#13;
&#13;
At the outset, Chapter 1 reviews the existing literature on (de)fluorinative chemical transformations, focusing on deoxyfluorination and hydrodefluorination, as well as examples of nontrigonal tricoordinate phosphorus compounds and their characteristic reactivity. Combining the two approaches, Chapters 2 and 3 introduce method development for accessing organofluorine compounds using a butterfly-shaped phosphorus triamide as a bond activator. In Chapter 2, a method for the deoxyfluorination of aliphatic alcohol substrates via O−H activation by phosphorus, catalyzed by borane Lewis acids, is detailed. The scope of the method covers tertiary alkyl fluorides, which are generally challenging targets, selectively yielding stereoinversion products for chiral substrates. Chapter 3 reports a closed P(III)/P(V) synthetic cycle for the hydrodefluorination of polyfluoroarene substrates that consists of C−F oxidative addition, F-to-H ligand metathesis, and C−H reductive elimination. The overall sequence is analogous to transition metal-catalyzed aryl cross-coupling reactions. Taken together, the methods described in this dissertation highlight the potential of nontrigonal phosphorus compounds as mediators for the manipulation of strong covalent bonds, useful in the development of synthetic methods that complement existing ones.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158937</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing Telecom Band-Compatible Molecular Color Centers for Quantum Networking</title>
<link>https://hdl.handle.net/1721.1/158936</link>
<description>Developing Telecom Band-Compatible Molecular Color Centers for Quantum Networking
Greer, Rianna Bliss
Quantum networking is a new modality of information transmission that will revolutionize the future of telecommunications. However, the realization and widespread use of quantum networking demand low signal loss and distortion over long distances. To achieve this, prospective materials for quantum networking must emit in fiber optics’ optical communications band, defined as 1260 to 1625 nm and commonly known as the “telecom band.” Vanadium dopants in silicon carbide have demonstrated near-infrared emission combined with a spin-photon interface, but these systems lack tunability over emission wavelength, preventing emission in the telecom band. This thesis combines the promising electronic structure of these dopants with the inherent tunability of molecular systems to create a family of luminescent paramagnetic vanadium complexes that can achieve both telecom band emission and generalized, fine-tuned control over emission wavelength. Chapters 2 and 3 will outline approaches to target telecom band emission in a series of V_III complexes through a gradual and controlled increase of metal-ligand bonding covalency. This strategy culminates in a series of V_III complexes whose emission wavelength is tuned from 1237 nm to 1424 nm, achieving emission into the telecom band. Chapter 4 will discuss the impact of these strategies on the magnetic properties and spin dynamics of these systems through an analysis of their behavior under high-frequency, high-field EPR spectroscopy. This work provides a blueprint for the next generation of molecular spins with optical addressability in the near-infrared regime for applications in quantum networking.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158936</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defining the Influence of Host Cell Proteostasis Networks and&#13;
Temperature on Influenza Evolution</title>
<link>https://hdl.handle.net/1721.1/158935</link>
<description>Defining the Influence of Host Cell Proteostasis Networks and&#13;
Temperature on Influenza Evolution
Patrick, Jessica
Viruses accumulate mutations and evolve more rapidly than any domain of life. Not only does the random acquisition of mutations drive this high evolutionary rate, but constant pressure from the host also contributes. As minimalistic pathogens, viruses rely on host machineries to synthesize, fold, and degrade their proteins. These proteostasis machineries can influence the accessible sequence landscape of viral proteins, and thus shape their evolution. Furthermore, the entire viral replication cycle takes place within the host cell. Therefore, the environment of the host, including factors such as temperature, can influence the evolutionary trajectory of viral proteins. The overarching goal of my thesis work is to better understand the influence of the host cell environment, with a particular focus on the proteostasis networks and the temperature of the cell.&#13;
My first project uses deep mutational scanning to elucidate the roles of the host proteostasis networks in defining influenza hemagglutinin’s evolutionary ability. My second project takes a similar approach to investigate how high or low temperatures impact the accessible sequence space of HA. My third project combines both proteostasis network and temperature perturbations to investigate how the host cell environment can impact HA’s ability to escape neutralizing antibodies. My final project leverages the high mutation rate of influenza to study the phenomenon of error catastrophe, and the impact of altered proteostasis network environments on buffering the effect of mutations. Together, these studies clearly define a role for both the host proteostasis networks and the temperature environment in determining influenza’s accessible sequence space, currently underappreciated factors in predicting how viruses evolve to evade selection pressures and critical components to consider for successful vaccine and drug development as well as pandemic preparedness.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158935</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Near-Optimal Learning and Planning in Separated Latent MDPs</title>
<link>https://hdl.handle.net/1721.1/158934</link>
<description>Near-Optimal Learning and Planning in Separated Latent MDPs
Chen, Fan
We study computational and statistical aspects of learning Latent Markov Decision Processes (LMDPs). In this model, the learner interacts with an MDP drawn at the beginning of each epoch from an unknown mixture of MDPs. To sidestep known impossibility results, we consider several notions of δ-separation of the constituent MDPs. The main thrust of this paper is in establishing a nearly-sharp statistical threshold for the horizon length necessary for efficient learning. On the computational side, we show that under a weaker assumption of separability under the optimal policy, there is a quasi-polynomial algorithm with time complexity scaling in terms of the statistical threshold. We further show a near-matching time complexity lower bound under the exponential time hypothesis.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158934</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>VoxelPrompt: A Vision-Language Agent for Grounded Medical Image Analysis</title>
<link>https://hdl.handle.net/1721.1/158933</link>
<description>VoxelPrompt: A Vision-Language Agent for Grounded Medical Image Analysis
Hoopes, Andrew
We present VoxelPrompt, an agent-driven vision-language framework that tackles diverse radiological tasks through joint modeling of natural language, image volumes, and analytical metrics. VoxelPrompt is multi-modal and versatile, leveraging the flexibility of language interaction while providing quantitatively-grounded image analysis. Given a variable number of 3D medical volumes, such as MRI and CT scans, VoxelPrompt employs a language agent that iteratively predicts executable instructions to solve a task specified by a natural language input prompt. These instructions communicate with a vision network to encode image features and generate volumetric outputs (e.g., segmentations). VoxelPrompt interprets the results of intermediate instructions and plans further actions to compute discrete measures (e.g., tumor growth across a series of scans) and present relevant outputs to the user. We evaluate this framework on diverse neuroimaging tasks and show that the single VoxelPrompt model can delineate hundreds of anatomical and pathological features, measure many complex morphological properties, and perform open-language analysis of lesion characteristics. VoxelPrompt carries out these objectives with accuracy similar to that of fine-tuned, single-task models for segmentation and question-answering, while facilitating a large range of tasks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158933</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sub-Bottom Profiling Using an Autonomous Underwater Vehicle Equipped With a Sound Source and Towed Hydrophone Array</title>
<link>https://hdl.handle.net/1721.1/158932</link>
<description>Sub-Bottom Profiling Using an Autonomous Underwater Vehicle Equipped With a Sound Source and Towed Hydrophone Array
Pfenninger, Paige
Sub-bottom profiling using an autonomous underwater vehicle equipped with a source and a towed array is an excellent method to finely survey large areas of the ocean bottom with minimal interference from the water column. This approach has the benefit of being able to determine the range dependence of the sub-bottom on a meter-by-meter scale rather than assuming constant sub-bottom properties over a large range. This thesis conducts theoretical and experimental studies to investigate the feasibility of using the arrival times of acoustic signals from an autonomous underwater vehicle source to a short, 16-element towed hydrophone array to determine the sound speed and layer thickness of the seabed through Bayesian geoacoustic inversion. This method provides range-dependent geoacoustic parameters with a resolution on the order of 10 meters. Numerical studies indicate that, for timing data with low variance, arrival times can be used to accurately estimate seabed properties. However, the performance of the Bayesian inversion model deteriorates as the variance of the timing data increases. Experimental data were collected during the Seabed Characterization Experiment at the New England Mud Patch and the New England Shelf Break. This thesis attempts to improve the arrival times through the use of sub-array focusing but concludes that this method is not feasible due to the experimental data exhibiting a high level of variance in the sub-bottom timing returns, likely due to the presence of scatterers in the sediment layer. Therefore, the mean and variance of the direct path, bottom, and sub-bottom timing returns were calculated using Gaussian process regression. Furthermore, the results show that layer thickness and sound speeds are highly coupled, making it challenging to uniquely determine seabed properties.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158932</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design-technology Co-optimization for Sub-2 nm Technology Node Based on 2D Materials</title>
<link>https://hdl.handle.net/1721.1/158931</link>
<description>Design-technology Co-optimization for Sub-2 nm Technology Node Based on 2D Materials
Yao, Aijia
Emerging disruptive technologies such as Artificial Intelligence (AI) and 6G communications have driven stringent demands for hardware components that enable faster and more energy-efficient computation. With the diminishing returns of traditional silicon-based scaling and the escalating complexity of advanced semiconductor processes, two-dimensional (2D) materials offer promising opportunities when developed through Design-Technology Co-Optimization (DTCO). This thesis presents a comprehensive study of DTCO with a novel framework tailored for 2D material-based electronics that addresses critical challenges in material synthesis, device design, and circuit integration. In this framework, experimental material and device data are integrated into the design and optimization of MoS₂-based multichannel transistors (MCTs). With the help of DTCO, we have achieved record performance for double-gate, single-channel MoS₂ transistors as well as the first demonstration of high-performance, functional double-channel MoS₂ transistors. Based on the results of MCTs, a Process Design Kit (PDK) is developed to facilitate circuit-level integration. These advancements constitute a promising foundation for the development of next-generation electronics beyond the sub-2 nm technology node.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158931</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Encoder-Agnostic Learned Temporal Matching for Video Classification</title>
<link>https://hdl.handle.net/1721.1/158930</link>
<description>Encoder-Agnostic Learned Temporal Matching for Video Classification
Ho, Darryl
In recent years, large transformer-based video encoder models have greatly advanced state-of-the-art performance on video classification tasks. However, these large models typically process videos by averaging embedding outputs from multiple clips over time to produce fixed-length representations. This approach fails to account for a variety of time-related features, such as variable video durations, chronological order of events, and temporal variance in feature significance. While methods for temporal modeling do exist, they often require significant architectural changes and expensive retraining, making them impractical for off-the-shelf, fine-tuned large encoders. To overcome these limitations, we propose DejaVid, an encoder-agnostic method that enhances model performance without the need for retraining or altering the architecture. Our framework converts a video into a variable-length temporal sequence of embeddings, which we call a multivariate time series (MTS). An MTS naturally preserves temporal order and accommodates variable video durations. We then learn per-timestep, per-feature weights over the encoded MTS frames, allowing us to account for variations in feature importance over time. We introduce a new neural network architecture inspired by traditional time series alignment algorithms for this learning task. Our evaluation demonstrates that DejaVid substantially improves the performance of a state-of-the-art large encoder, achieving leading Top-1 accuracy of 77.2% on Something-Something V2, 89.1% on Kinetics-400, and 88.6% on HMDB51, while adding fewer than 1.8% additional learnable parameters and requiring less than 3 hours of training time.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158930</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reducing Transformer Key-Value Cache Size with Cross-Layer Attention</title>
<link>https://hdl.handle.net/1721.1/158929</link>
<description>Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
Brandon, William
Key-value (KV) caching plays an essential role in accelerating decoding for transformer-based autoregressive large language models (LLMs). However, the amount of memory required to store the KV cache can become prohibitive at long sequence lengths and large batch sizes. Since the invention of the transformer, two of the most effective interventions discovered for reducing the size of the KV cache have been Multi-Query Attention (MQA) and its generalization, Grouped-Query Attention (GQA). MQA and GQA both modify the design of the attention block so that multiple query heads can share a single key/value head, reducing the number of distinct key/value heads by a large factor while only minimally degrading accuracy. In this work, we show that it is possible to take Multi-Query Attention a step further by also sharing key and value heads between adjacent layers, yielding a new attention design we call Cross-Layer Attention (CLA). With CLA, we find that it is possible to reduce the size of the KV cache by another 2× while maintaining nearly the same accuracy as unmodified MQA. In experiments training 1B- and 3B-parameter models from scratch, we demonstrate that CLA provides a Pareto improvement over the memory/accuracy tradeoffs which are possible with traditional MQA, potentially enabling future models to operate at longer sequence lengths and larger batch sizes than would otherwise be possible.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158929</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-Designing Efficient Systems and Algorithms for Sparse and&#13;
Quantized Deep Learning Computing</title>
<link>https://hdl.handle.net/1721.1/158928</link>
<description>Co-Designing Efficient Systems and Algorithms for Sparse and&#13;
Quantized Deep Learning Computing
Tang, Haotian
Deep learning models are becoming increasingly complex, expanding from 1D text and 2D images to 3D point clouds, while their size continues to grow exponentially. This trend highlights the need for greater efficiency. This thesis systematically explores efficiency in two resource-intensive domains—autonomous driving and generative AI—by focusing on fundamental model compression techniques: sparsity and quantization, alongside the co-optimization of systems and algorithms. Sparsity is crucial for autonomous vehicle (AV) applications. LiDAR processing, which requires 3D sparse computation, is inefficiently handled by current GPU libraries, creating a performance bottleneck in AV perception. To address this, we propose TorchSparse++, a high-performance GPU system for 3D sparse convolution, achieving 1.7-3.3× speedups over state-of-the-art libraries. Additionally, we introduce BEVFusion, an efficient multi-sensor fusion framework that fuses information in bird’s-eye-view (BEV) space, reducing computation by 1.9× while enhancing accuracy compared to prior methods. Generative AI is constrained by the massive size of models, necessitating quantization for efficient deployment. This thesis presents two GPU systems for accelerating large language models (LLMs): TinyChat for edge LLM deployment and QServe for cloud-based LLM serving. TinyChat boosts edge LLM inference by 3× using activation-aware weight quantization (AWQ). QServe further improves performance with activation and KV cache quantization, enhancing the throughput of NVIDIA TensorRT-LLM by 1.2-2.4× on A100 GPUs. Finally, we introduce HART, an efficient autoregressive image generation method that achieves 4.5-7.7× higher throughput compared to diffusion models while maintaining visual quality. HART achieves this improvement by leveraging quantized, or discrete, visual tokens to capture the high-level structure of images, while a lightweight diffusion model is used for fast inference of finer details.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158928</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building World Models with Neural Physics</title>
<link>https://hdl.handle.net/1721.1/158927</link>
<description>Building World Models with Neural Physics
Ma, Pingchuan
World models learn the dynamics of environments in a data-driven manner, enhancing performance and efficiency in downstream tasks such as control, design, recognition, and generation, thanks to cost-effective simulation and differentiability. A pre-trained world model should ideally (1) accurately simulate ground-truth dynamics, (2) adapt easily to novel configurations, and (3) generalize across diverse physical effects. Previous attempts in this area have either utilized differentiable model-based physics with few parameters exposed or trained for specific scenarios with minimal physical priors integrated. These world models fall short of their objectives, limiting their applicability in real-world accuracy-critical deployments and scalability to larger pre-trained world models. In this thesis, we aim to build world models with neural physics, a hybrid neural-physics framework that models the basic dynamics with differentiable physics while learning all additional modules through neural networks. By integrating neural physics, the world models adhere closely to physical principles while efficiently learning diverse effects. The modular structure of neural physics allows world models to generalize to novel configurations simply by installing different pretrained neural modules. We will demonstrate the effectiveness of this novel framework in applications such as reconstruction, robotic control, and scientific discovery.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158927</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Sensing of N-Nitrosodimethylamine and Methane</title>
<link>https://hdl.handle.net/1721.1/158926</link>
<description>Chemical Sensing of N-Nitrosodimethylamine and Methane
Feng, Haosheng
In Chapter 1, an introduction to chemical sensing is presented. Several modalities are introduced, including optical, gravimetric, and chemiresistive, together with brief introductory backgrounds. Subsequently, metrics to assess sensor performance are summarized. Finally, some strategies to combat interferants during chemical sensing are discussed.&#13;
&#13;
In Chapter 2, published work on a luminescent method to determine levels of N-nitrosamines is presented. This work involved the synthesis of five phosphines bearing N-heterocycles, followed by coordination with Cu(I) to give luminescent complexes. Emission spectra spanned the visible range, demonstrating the tuneability of these compounds. The complexes’ interactions with N-nitrosamines were also examined through spectroscopy and crystallography.&#13;
&#13;
In Chapter 3, development of free-volume promoting monomers and catalysts for insertion polymerization is demonstrated. Insertion-polymerized material was compared to that synthesized using Ring Opening Metathesis Polymerization (ROMP), showing that the former had superior properties for methane detection through higher surface areas and porosity.&#13;
&#13;
In Chapter 4, the structure-activity relationship of components within a previously published methane sensing assembly was thoroughly examined to identify how changes in humidity levels influenced sensing response. Poly-4-vinylpyridine modification was performed under flow conditions, while the chemical composition of the polyoxometalate (POM) component was also varied. Humidity was determined to most significantly affect the POM and influence the electrical contact between carbon nanotubes and gold.&#13;
&#13;
Finally, Chapter 5 presents several modifications of the parent porous framework outlined in Chapter 3. A soluble monomer bearing adamantyl substituents was successfully synthesized by attachment of isopropyl units. Its propensity to participate in insertion polymerization was then examined. Sulfonation and nitration of the parent polymer I-AntN were also conducted and the products analyzed. Attempts at copolymerization of AntN with CO were also described.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158926</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Scalable Quantum Systems From First-Principles to Large-Scale Control</title>
<link>https://hdl.handle.net/1721.1/158925</link>
<description>Engineering Scalable Quantum Systems From First-Principles to Large-Scale Control
Harris, Isaac B. W.
Color centers in solids are promising platforms for quantum communication, sensing, and computing, featuring highly coherent optical transitions, as well as native electron and nuclear spins that can be used as quantum memories. Existing state-of-the-art demonstrations have shown that multi-qubit control, spin-photon entanglement, and heralded entanglement are possible with devices consisting of a few color centers. However, the path to scaling the number of color centers integrated in these devices to the thousands or millions needed for advanced quantum networking and computing applications remains unclear. In particular, the requirement for highly coherent quantum operations necessitates both operation at cryogenic temperatures and precise classical control signals delivered to each color center. Precise qubit control greatly increases the system complexity, while the cryogenic operation limits the amount of power that the system can dissipate. Both factors severely limit the number of color centers that can realistically be included in a single device using existing methods. This work will tackle the scaling problem from a system-level perspective in two directions. Firstly, I will quantify performance trade-offs between coherence, temperature, and optical properties of the group-IV color centers. A novel color center system, the ¹¹⁷SnV⁻ hyperfine color center, will be presented and its advantages compared to traditional group-IV color centers will be explored. Secondly, a method to integrate color centers with application-specific integrated circuits (ASICs) will be demonstrated. The ASICs provide multiplexed control signals and increased control field efficiency, thus decreasing both the wiring complexity and thermal load per qubit. This work will thus pave the way to color center-based devices in which the number of qubits is not limited by the complexity or power dissipation of the control system.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158925</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision Pointing for the CubeSat Laser Infrared CrosslinK (CLICK) Mission</title>
<link>https://hdl.handle.net/1721.1/158924</link>
<description>Precision Pointing for the CubeSat Laser Infrared CrosslinK (CLICK) Mission
Forester, Paige
Advances in Free Space Optical Communications have led to numerous missions that have demonstrated optical space-to-ground links; however, fewer missions have demonstrated optical space-to-space links. NASA’s CubeSat Laser Infrared CrosslinK (CLICK) Mission aims to be the first to demonstrate optical space-to-space communication on a CubeSat scale using Commercial Off the Shelf (COTS) components that include a micro electromechanical system (MEMS) fine steering mirror for precision pointing. The first phase of the CLICK mission, CLICK-A, launched in September 2022 to demonstrate optical downlink. The second phase, CLICK-B/C, aims to demonstrate optical crosslink between two spacecraft: CLICK-B and CLICK-C. Optical crosslink communication requires precision pointing for both spacecraft to close the link. The development of the CLICK-B/C Fine Pointing, Acquisition, and Tracking (PAT) system is presented in this thesis, as well as the analysis of disturbance rejection and evaluation of expected spacecraft disturbances. This thesis also assesses the slewing required for differential drag control, which is used to maintain the crosslink range between the two CubeSats. Preliminary results are presented from the CLICK-B/C flight hardware integration and testing phases, as well as findings from simulation of the lasercom payload’s performance.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158924</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information-centric Algorithms for Feature Extraction in High-Dimensional Sequential Data</title>
<link>https://hdl.handle.net/1721.1/158923</link>
<description>Information-centric Algorithms for Feature Extraction in High-Dimensional Sequential Data
Jin, Jiejun
Hidden Markov Models (HMMs) are a cornerstone of sequential data analysis, offering a robust framework for modeling observable events influenced by hidden internal states. With applications spanning speech recognition, video analysis, bioinformatics, and financial time series, HMMs enable the prediction and classification of raw data by leveraging their dual-layer stochastic structure: hidden Markov states and observable outputs. However, as real-world data grows increasingly high-dimensional, extracting meaningful features from observations becomes critical to reduce computational complexity while retaining relevant information.&#13;
&#13;
This thesis addresses key challenges in feature extraction for high-dimensional HMMs. Current methods, such as neural networks (NNs), are widely used for nonlinear feature learning but lack mechanisms to prioritize useful features or incorporate known structural constraints. To bridge this gap, this work proposes novel algorithms to decouple representation learning from task-specific objectives and extract features aligned with predefined constraints.&#13;
&#13;
The theoretical foundation, including local information geometry and Hirschfeld-Gebelein-Rényi (HGR) maximal correlation, is introduced in Chapter 2. Chapter 3 details three innovative feature extraction algorithms and their corresponding neural network architectures, highlighting their strengths and limitations. Convergence analyses and tail bounds for these methods are presented in Chapter 4. Numerical simulations validating the efficacy of the proposed approaches are provided in Chapter 5, while Chapter 6 concludes with a summary of contributions and potential future research directions.&#13;
&#13;
This thesis advances the field by offering structured, constraint-aware feature extraction techniques tailored for high-dimensional sequential data, setting the stage for more effective and interpretable inference in HMMs.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158923</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Solving Larger Games: Designing New Algorithms Adaptable to Deep Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/158922</link>
<description>On Solving Larger Games: Designing New Algorithms Adaptable to Deep Reinforcement Learning
Liu, Mingyang
In this thesis, we explore, from a theoretical perspective, the design of algorithms capable of handling large games where the state space is too large to store strategies in a tabular format. Specifically, we focus on developing algorithms suitable for deep reinforcement learning in two-player zero-sum extensive-form games. There are three critical properties for effective deep multi-agent reinforcement learning: (last/best) iterate convergence, efficient utilization of stochastic trajectory feedback, and theoretically sound avoidance of importance sampling corrections. Chapter 3 introduces Regularized Optimistic Mirror Descent (Reg-OMD), which provably converges to the Nash equilibrium (NE) linearly in last-iterate. Chapter 4 shows that algorithms based on regret decomposition enjoy best-iterate convergence to the NE. Chapter 5 proposes Q-value based Regret Minimization (QFR), which achieves all three properties simultaneously.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158922</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polymer Deconstructability and Recyclability via Introduction of Cleavable Si−O Bonds</title>
<link>https://hdl.handle.net/1721.1/158921</link>
<description>Polymer Deconstructability and Recyclability via Introduction of Cleavable Si−O Bonds
Johnson, Alayna
The synthesis of a new polysilylether via entropy-driven ring-opening metathesis polymerization (ED-ROMP) of cyclic bifunctional silyl ether-based monomers is reported. High molecular weight polymers (up to 100 k) with narrow dispersities were achieved at modest temperatures. These polymers display excellent thermal stability and ultra-low T_g (–88 ºC). The polymers are both rapidly deconstructable via the cleavage of the labile silicon-oxygen linkages with either acid or fluoride triggers and partially depolymerizable by the addition of exogenous metathesis catalyst. Analysis of the deconstructed polymer products provided insight into the polymer microstructure, showing that the ED-ROMP process was regiorandom. Altogether, this work offers a new class of deconstructable polymers with a range of potential applications. Incorporation of these bifunctional silyl ether-based monomers into copolymers could aid in the triggered deconstruction of otherwise nondegradable hydrocarbon backbones.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158921</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and Analysis of Voltage Feasibility Problems for&#13;
Cost-Effective Microgrids</title>
<link>https://hdl.handle.net/1721.1/158920</link>
<description>Modeling and Analysis of Voltage Feasibility Problems for&#13;
Cost-Effective Microgrids
Jones, Aaron Jerome
Global efforts to mitigate climate change have led to a significant increase in the integration of renewable energy resources into the electricity grid. This transition not only necessitates the adoption of renewable energy technologies but also requires rethinking and redesigning existing power grid infrastructures to accommodate the unique characteristics of these resources. This research focuses on modeling techniques which can assist in analyzing the feasibility of microgrid topologies. Microgrids have emerged as a flexible and efficient approach to implementing novel grid topologies that support higher levels of renewable energy penetration. They also support the integration of distributed energy resources (DERs), such as photovoltaic (PV) systems, thereby promoting a more sustainable and efficient energy grid design. This thesis utilized sanitized load and system topology data from a real-world microgrid located in Illinois to test the feasibility of increasing the number of PV units the system can utilize for reactive power support. &#13;
&#13;
In these systems, ensuring feasibility is a crucial concern due to power mismatches caused by the inherent variability of renewable resources. This work focuses on maintaining voltage within the constraints while increasing PV penetration on the system. We simulate the implementation of microgrids with PV generation using Alternating Current Optimal Power Flow (AC-OPF). The results of this thesis show the limits of feasible reactive power support from distributed PV units on a utility disconnected microgrid based on our voltage constraints. The study shows that there exists a limit to reactive power support provided by distributed PV units. Beyond this limit, we see voltage collapse, manifested as infeasibility of power flow solutions. To avoid this problem, we optimize the reactive power support from PV so that a solution exists within the constraints. The lesson learned for practical use of this result is that operators should use AC-OPF to compensate for reactive power using PV. Future research will explore the challenges and opportunities associated with the widespread adoption of microgrids, such as dynamic voltage instabilities that can occur with high levels of PV integration and complexities in inverter control strategies.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158920</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Engineering of Protected Superconducting&#13;
Qubits</title>
<link>https://hdl.handle.net/1721.1/158919</link>
<description>Design and Engineering of Protected Superconducting&#13;
Qubits
Kim, Junghyun
Building extensible quantum information processors becomes increasingly promising as the qubits exhibit longer coherence times. To this end, realizing protected qubits, whose Hamiltonians are inherently resilient to both relaxation and dephasing, has attracted strong interest. In this thesis, we primarily explore the soft 0 − π qubit, a leading candidate for implementing superconducting qubit protection with current fabrication techniques. To enhance protection, the soft 0 − π qubit requires its two major modes, the charge-mode (θ) and the flux-mode (ϕ), to satisfy an asymmetric condition: maximizing charge-mode capacitance while minimizing flux-mode capacitance. The main challenge is therefore reducing stray capacitance from the large charge-mode capacitor, which hinders the reduction of flux-mode capacitance. To address this challenge, we depart from the conventional coplanar interdigitated capacitor design and use parallel-plate capacitors (PPC) with small footprints, achieving the desired large charge-mode capacitance while reducing unwanted stray capacitances. By reducing the capacitor area by a factor of approximately 50, the PPC 0−π qubit has achieved an estimated Eᵠ_C /Eᶿ_C ratio of 30–50, placing it among the highest reported. Additionally, we propose enhanced mode-selective control of the soft 0−π qubit using these parallel-plate capacitors. Finally, we discuss the remaining challenges of the soft 0−π qubit and introduce alternative parameter regimes that can potentially improve Raman-based control and qubit readout.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158919</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Distributed Deep Neural Network Training&#13;
and Fine-Tuning Through Resource Interleaving</title>
<link>https://hdl.handle.net/1721.1/158918</link>
<description>Accelerating Distributed Deep Neural Network Training&#13;
and Fine-Tuning Through Resource Interleaving
Rajasekaran, Sudarsanan
The ever-growing increase in dataset and model sizes of deep learning has created a massive demand for efficient GPU clusters. As the number of GPUs increases, the communication overhead of distributed Machine Learning (ML) training and fine-tuning workloads quickly takes up a significant portion of iteration time. Yet state-of-the-art ML schedulers tend to ignore the communication pattern of ML jobs when placing workers on GPUs. This thesis advocates for communication-aware resource scheduling as a critical approach to optimizing network utilization in ML clusters. We introduce a key idea for accelerating Deep Neural Network (DNN) jobs that interleaves the communication demands of different jobs sharing a network link. To illustrate this concept of interleaving, we first demonstrate how intentionally creating unfairness in bandwidth share between different DNN jobs improves their iteration times. Building on this insight, we present two novel systems designed to minimize network congestion and accelerate DNN training and fine-tuning jobs. The first system, Cassini, achieves interleaving using a centralized approach. In contrast, the second system, MLTCP, achieves the same goal using a distributed approach. Both systems are practical and readily deployable, depending on the service provider’s preference for centralized or distributed solutions. In particular, Cassini is a centralized network-aware job scheduler for ML clusters. Cassini introduces a novel geometric abstraction to consider the communication pattern of different jobs while placing them on network links. To do so, Cassini uses an Affinity graph that finds a series of time-shift values to adjust the communication phases of a subset of jobs such that the communication patterns of jobs sharing the same network link are interleaved with each other. Second is MLTCP, a distributed technique to approximate an interleaved centralized flow schedule.
At the heart of MLTCP lies a straightforward principle based on a key conceptual insight: by scaling the congestion window size (or sending rate) based on the number of bytes sent at each iteration, MLTCP flows eventually converge into a schedule that reduces network contention. To evaluate these systems, we conduct experiments using real-world DNN models on a testbed with Nvidia A100 GPUs. Cassini and MLTCP improve the average iteration times by up to 1.6× and 1.9×, respectively, demonstrating their effectiveness in reducing network congestion and accelerating ML workloads.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158918</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robot Fleet Learning From Heterogeneous Data</title>
<link>https://hdl.handle.net/1721.1/158917</link>
<description>Robot Fleet Learning From Heterogeneous Data
Wang, Lirui
One of the key roadblocks for training generalist robotic models today is heterogeneity. Previous robot learning methods often collect data to train with one specific embodiment for one task, which is expensive and prone to overfitting. Similar to humans, robots and embodied agents inherently have to deal with heterogeneous inputs and outputs due to the nature of the perception-action loops across diverse environments. The data formats and distributions collected from these systems, and used to train them, vary across modalities such as color, depth, tactile, and proprioceptive information, and across domains such as simulation, real robots, and human videos. Moreover, fleets of robots and machines ingest massive amounts of streaming data generated by interacting with their environments in a distributed fashion, and teams of robots can co-acquire diverse skills through their experiences in varied settings. The core idea behind my research, fleet learning, is to embrace the heterogeneous nature of robot learning to develop efficient and general algorithms. In this thesis, I will present a few angles toward tackling such challenging problems and application domains, ranging from tokenizing data, aligning representations, and merging policies, to composing skills. We develop insights and theories, often from linear settings, for how fleet learning can lead to more principled and effective use of robotic data and propose algorithmic progress, often through alignments, toward building generalist robotic foundation models. Empirically, we show advanced robotic manipulation capabilities by leveraging data from multimodal sensory inputs and multiple domains.
In addition to outperforming several previous state-of-the-art methods across simulation and real-world benchmarks, we develop intelligent systems for robotic applications such as package handling in warehouses, as well as dexterous tool-use tasks with applications in manufacturing, logistics, and household robotics.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158917</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating the Discovery of Novel Metal Organic Chalcogenolates: A Computational and Machine Learning-Driven Approach</title>
<link>https://hdl.handle.net/1721.1/158916</link>
<description>Accelerating the Discovery of Novel Metal Organic Chalcogenolates: A Computational and Machine Learning-Driven Approach
Ladera, Adriana J.
Metal Organic Chalcogenolates (MOChas) are a class of robust, self-assembling, and hybrid materials featuring inorganic metalo-chalcogen frameworks that are scaffolded by organic ligands. These low-dimensional structures exhibit tunable optoelectronic properties, making them promising candidates for various applications, including optical sensors and nanotechnology. This tunable relationship between MOCha structural arrangements and targeted properties opens up a vast yet challenging search space for novel MOCha structures. Density Functional Theory (DFT) can predict properties of materials with good accuracy, making it a powerful choice even for hypothetical materials. However, the discovery of novel MOCha structures is constrained by the poor scalability of DFT relaxation times for large systems and a lack of high-throughput design methods that can capture the complex geometries of MOChas. In this work, we employ DFT calculations to investigate the energetic and electronic properties of various MOChas, and provide insight into the optical behavior and kinetic favorability of such structures. To address the computational bottlenecks of high-throughput design and DFT workloads, we discuss the use of machine-learned interatomic potentials and various generative models that can enable rapid prototyping of novel MOCha structures.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158916</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of foundation models for molecular representation in cancer drug discovery and precision oncology</title>
<link>https://hdl.handle.net/1721.1/158915</link>
<description>Application of foundation models for molecular representation in cancer drug discovery and precision oncology
Khokhlov, Khrystofor
Drug discovery is a resource-intensive and time-consuming process, often requiring decades of effort and substantial financial investment, with a high risk of failure. Despite advances in high-throughput screening technologies, the size of chemical space presents a significant challenge: it is not feasible to experimentally screen all potential drug-like molecules. Most commercially available chemical libraries consist of molecules that are synthesized on demand from pre-existing building blocks, further limiting the exploration of novel chemotypes. This thesis aims to explore whether drug discovery could be accelerated by leveraging advances in deep learning (DL) models to identify promising hit candidates and improve the prediction of drug response in cancer. Development of cancer drugs that will be effective on a predictable set of targets remains a major challenge. We are developing a DL model capable of identifying potentially novel cancer drug chemotypes and reliably predicting drug response on cancer cell line targets. Leveraging recent progress in transformer-based architectures and graph neural networks, we use molecular language models, graph models, and cell foundation models to embed both molecular and genomic data into low-dimensional subspaces, and then use standard machine learning (ML) tools in these low-dimensional spaces to predict the efficacy of the molecules in particular cell lines. We utilize large-scale datasets from the PRISM project at the Broad Institute, which provide a wealth of drug repurposing and oncology data, enabling robust training of ML models. We show that these vector embeddings are superior to existing methods, as they enable more accurate drug response predictions. The first part of this thesis is dedicated to the development of a deep learning cancer drug discovery model, focused on in silico screening of chemical space to search for cancer drug candidates. 
The second part is focused on the development of a precision oncology model based on a multichannel neural network architecture. Our pipeline involves training single-target models on drug molecular structures, followed by integrating genomic data to enhance biological context and train a hybrid model capable of predicting drug response for novel drug:target pairs. Our results demonstrate that vector embeddings produced by the proposed framework outperform existing approaches, offering a more accurate and efficient means of exploring chemical space. This work highlights the transformative potential of ML/DL methods in drug discovery, enabling targeted, cost-effective exploration of chemical libraries, and advancing the development of precision oncology treatments.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158915</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Dual Extruder Biomaterial 3D Printer</title>
<link>https://hdl.handle.net/1721.1/158914</link>
<description>Development of Dual Extruder Biomaterial 3D Printer
de Alva, Jesse P.
This research presents the design and fabrication of a novel dual-extruder biotic 3D printer for the precise deposition of natural biocomposites using organic materials such as pectin, chitosan, and cellulose. Unlike traditional FDM printers that rely on thermoplastic extrusion, this printer employs a syringe-based mechanical extruder capable of depositing viscous biomaterial hydrogels. The integration of a first-of-its-kind dual-extruder system enables the fabrication of multi-material prints and the exploration of biomaterial composites and complex geometric structures, thereby advancing sustainable, bio-inspired manufacturing.&#13;
This thesis emphasizes the machine engineering aspects of the printer's development, including project motivation, systematic design methodology, component design and fabrication, testing, and exploration of future work. Notable features of the system include user-friendly operation for non-experts, open-source accessibility, and compatibility with a wide range of biomaterials. By addressing existing limitations in biomaterial 3D printing technology, this work provides a robust platform to support future research in biomaterials, sustainable additive manufacturing, and bio-inspired design. Furthermore, the open-source nature of the printer fosters innovation and collaboration, accelerating the adoption of sustainable materials and manufacturing methods.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158914</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Annealing Techniques for Color Center Formation</title>
<link>https://hdl.handle.net/1721.1/158913</link>
<description>Annealing Techniques for Color Center Formation
Christen, Ian
Color centers in diamond have emerged as leading atom-like quantum systems for applications spanning from quantum repeaters to sensors. However, the optical and spin properties of engineered diamond color centers are limited by crystal damage produced during ion implantation, crystal irradiation, and annealing. In this thesis, we develop advanced material processing methods and characterization techniques to address critical challenges in the formation of high-performance diamond color centers, advancing towards the efficient creation of desired dopant-vacancy centers with minimal formation of deleterious multi-vacancy clusters.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158913</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermally Hardened RF GaN HEMTs in Extreme Environments</title>
<link>https://hdl.handle.net/1721.1/158912</link>
<description>Thermally Hardened RF GaN HEMTs in Extreme Environments
Niroula, John
Traditional room-temperature electronics based on silicon has truly changed the world around us over the past 70+ years. However, many more applications still exist that are limited by the temperature performance of silicon devices (&lt;250 °C). High-temperature (HT) electronics is a rapidly growing field with critical future applications in geothermal energy, space exploration, hypersonic aircraft, and deep gas/oil drilling, among others. Gallium Nitride (GaN) high electron mobility transistors (HEMTs) are especially well suited for high-temperature electronic applications due to their low intrinsic carrier concentration and excellent electrical properties. Despite great progress in HT GaN technology, most demonstrations target logic or mixed-signal applications, and the performance of radio-frequency (RF) GaN devices remains lacking at high temperatures, despite the critical need for wireless communication systems and high-speed electronics in these applications. In this thesis, we investigate the physics of GaN HEMT devices at high temperatures and design RF transistors that demonstrate record performance at these temperatures.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158912</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Rich Personalized Causal Inference</title>
<link>https://hdl.handle.net/1721.1/158911</link>
<description>Data-Rich Personalized Causal Inference
Shah, Abhin
There is a growing interest in individual-level causal questions to enable personalized decision-making. For example, what happens to a particular patient’s health if we prescribe a drug to them, or what happens to a particular consumer’s behavior if we recommend a product to them? Conducting large-scale randomized experiments to answer such questions is impractical—if not infeasible—due to cost, the level of personalization, or ethical concerns. Observational data offer a valuable alternative, but their lack of explicit randomization makes statistical analysis particularly challenging. In this thesis, we exploit the richness of modern observational data to develop methods for personalized causal inference. In the first part, we introduce a framework for causal inference using exponential family modeling. In particular, we reduce answering causal questions to learning an exponential family model from one sample. En route, we introduce a computationally tractable alternative to maximum likelihood estimation for learning exponential families. In the second part, we leverage ideas from doubly robust estimation to enable causal inference with black-box matrix completion under a latent factor model.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158911</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coarse Modality</title>
<link>https://hdl.handle.net/1721.1/158905</link>
<description>Coarse Modality
Flor, Enrico
One of the early successes of the application of possible worlds semantics to the analysis of natural language is Kratzer’s account of modality. A large part of the subsequent literature on modals has sought to expand the crosslinguistic coverage of that framework, and, in so doing, many new generalizations and constraints have been proposed and re-examined. The present dissertation situates itself within this tradition and makes both an empirical and theoretical contribution. Using the Italian adverb magari as the main empirical source, it will be argued that there exists a previously unnoticed type of modality which is referred to here as “coarse”. Its most evident manifestation is a special type of epistemic possibility, one that comes with an “antievidential” requirement. Antievidential possibility in assertions and questions is discussed in Chapters 1 and 3 respectively. Chapter 2 frames coarse modality as a more general phenomenon that comes about through modification of modal expressions. The theoretical argument of this dissertation is a novel corroboration of Kratzer’s premise semantics approach. It will be argued that the most natural and general account of coarse modality is possible by utilizing the premise set, a powerful resource of the system, in a novel way.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158905</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Responsible Computational Text Generation: AI Content Classification and Policy Framework</title>
<link>https://hdl.handle.net/1721.1/158904</link>
<description>Responsible Computational Text Generation: AI Content Classification and Policy Framework
Jung, Minseok
Recent advances in generative AI, particularly in producing human-like text, have blurred the lines between human and AI authorship. Since these AI tools rely on stochastic generation rather than traditional scientific reasoning, concerns about misinformation and reliability have emerged, highlighting the need for AI detection tools and policy guidelines. In response, this study proposes a dual approach: (1) the application of adaptive thresholds to improve the use of AI text detectors and (2) an AI policy framework based on user patterns and opinions. To enhance detector performance, we present a threshold optimization algorithm that adapts to diverse subgroups, such as those based on text lengths and stylistic features, thereby reducing discrepancies in error rates. The commonly used method relies on a single universal threshold, which has led to inconsistent results across various text types because of different probability distributions. Our approach addresses these shortcomings by tailoring thresholds to the specific characteristics of each group. In parallel, the study examines the pressing need for comprehensive AI guidelines, given the rise of misinformation and academic integrity issues. While a few institutions have introduced comprehensive policies, many lack approaches grounded in user patterns and opinions. To remedy this problem, we propose a policy framework based on a user study. The findings of this research will provide practical solutions for more effective AI text classification and a reliable framework for the necessity of AI writing policies.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158904</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering the link between twin-twin interactions and damage nucleation in an (α+β) Ti alloy</title>
<link>https://hdl.handle.net/1721.1/158903</link>
<description>Uncovering the link between twin-twin interactions and damage nucleation in an (α+β) Ti alloy
Cooper, Megan F. L.
Recently, an (α+β) Ti alloy was developed with an outstanding combination of both high strength and high ductility; however, the plasticity micromechanisms that lead to damage nucleation for this alloy had not yet been investigated in detail. In this work, post-mortem analysis and an in-situ SEM-EBSD tensile experiment were conducted to determine where damage was nucleating most frequently in the microstructure, and what deformation modes were associated with damage nucleation. Damage within primary α grains was found to be the most common, with most of these damage incidents occurring along {101̄2} twin-twin boundaries with a ~60° misorientation. The {101̄2} twinning mode is only activated in the localized neck, and twin activation is strongly dependent on initial crystallographic texture. The twinned domains are rotated such that prismatic slip is easier to activate, but prismatic slip transfer is unlikely across ~60° twin-twin boundaries due to geometric incompatibilities. The in-situ test revealed that a crack formed along a ~60° twin-twin boundary where slip was blocked. These findings provide new insights into how twin-twin interactions in Ti alloys can lead to damage nucleation and impact overall ductility.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158903</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Interpretation and Management for an Atmospheric Probe Mission to Venus</title>
<link>https://hdl.handle.net/1721.1/158902</link>
<description>Data Interpretation and Management for an Atmospheric Probe Mission to Venus
Apodaca Moreno, Maria Regina
After nearly 40 years without a dedicated U.S. mission to Venus, the Rocket Lab Mission to Venus is planning to launch a small probe to analyze the composition of Venus’ cloud layers. As the probe descends through the atmosphere, it will spend around five minutes in the cloud deck, from 66 km to 48 km above the surface, and roughly 20 minutes total in the atmosphere [French et al., 2022]. The probe’s primary scientific instrument, the Autofluorescence Nephelometer (AFN), will gather data by measuring the light scattering off particles, providing insight into their chemical composition based on refractive index and particle size [Baumgardner et al., 2022]. Unfortunately, Mie scattering theory [Mie, 1908], the physics underpinning the AFN, holds that light scattering over a small solid angle is fundamentally degenerate: different combinations of refractive index and particle size can lead to identical light scattering. This degeneracy limits scientists’ ability to uniquely determine physical parameters of interest, leading some previous authors to rely upon helpful, but perhaps limiting, assumptions that mitigate this degeneracy. Complicating matters still further, the probe’s communication with Earth is subject to a strict data budget, limiting the number of AFN measurements that may be used for analysis to begin with. This thesis addresses two important problems associated with the Rocket Lab Mission to Venus: 1) how to mitigate the light scattering degeneracy with minimal assumptions and 2) how to transmit valuable information within the limited data budget. To address the first problem, I introduce a data retrieval algorithm, based upon Bayesian statistical inference [Lindley, 1965], which combines a physical model of the instrument and a prior probability distribution describing each physical property. 
In some cases, this method can estimate the correct particle size and refractive index of a particle as the maximum likelihood value, from a single measurement even as it relaxes certain assumptions that were previously standard in the field, such as a small refractive index range. Using my data retrieval algorithm, I reanalyze the data collected by the Pioneer MultiProbe Mission to Venus’ nephelometers without the need for supplementary data from a different instrument [Ragent and Blamont, 1980]. I also provide new insight into the particle size and refractive index distributions seen by the Pioneer Mission’s small probes, which had not been possible with previous techniques. To address the second problem, I propose a data strategy for limited data missions like the Rocket Lab Mission to Venus. The method developed in this work relies upon Gaussian Mixture Models, which can efficiently represent multiple measurements as
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158902</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Concepts for High-Acceleration Linear Actuators&#13;
for Precision Motion</title>
<link>https://hdl.handle.net/1721.1/158901</link>
<description>Design Concepts for High-Acceleration Linear Actuators&#13;
for Precision Motion
Kim, Adam K.
Advances in semiconductor photolithography scanners have made it possible to produce smaller, more affordable chips with higher throughput. Some of the key lithographic scanner components supporting these advancements are electromagnetic actuators responsible for positioning the long-stroke (LS) and short-stroke (SS) stages of the reticle stage in its scan direction. Such actuators need to provide the highest thrust at the deceleration and reacceleration phases when the stages turn around at the ends of the scanning trajectory. Thus, enhancing their acceleration capability and force output is essential for boosting chip throughput. However, the improved performance may demand large current densities that are unsustainable in terms of the associated power dissipation generated by ohmic losses in the copper coils. In this thesis, we continued a previous study conducted in our lab that explored the use of mechanical contact forces managed by a piezoelectric stack actuator (PEA). In this configuration, intermittent contact by the PEA can be used to apply forces to decelerate and reaccelerate the SS stage with respect to the LS stage during turnaround events. With such force assist, the non-contact precision actuators responsible for positioning the SS stage with respect to the LS stage no longer need to generate large thrusts for the deceleration and reacceleration. As a result, we can in principle decrease the weight and power loss of the SS-stage precision actuators, which thus lowers the thrust requirements for the LS-stage actuators responsible for accelerating both the LS and SS stages, resulting in lowered power consumption. Using the single degree-of-freedom experimental setup previously built in our lab, we conducted several characterization experiments to develop a PEA position feedback controller augmented by a hysteresis-compensated feedforward trajectory to shape the contact compression and forces. We find that introducing a viscoelastic contact interface is essential for stabilizing the PEA controller and slowing the contact dynamics to remain within the controller bandwidth. Our feedforward trajectory successfully brings a 0.84 kg mass moving towards the PEA with an initial speed of 60 mm/s to zero velocity in approximately 1.5 ms using 36 µm of PEA stroke length. 
These results demonstrate the feasibility of using PEAs as mechanical assist devices for high-acceleration turnaround events in lithography tools.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158901</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forecasting the lift of a randomly maneuvering airfoil&#13;
under dynamic stall conditions, Re ∼ 10⁵</title>
<link>https://hdl.handle.net/1721.1/158900</link>
<description>Forecasting the lift of a randomly maneuvering airfoil&#13;
under dynamic stall conditions, Re ∼ 10⁵
Kim, Donghyun
Dynamic stall is the abrupt flow separation that occurs on airfoils rapidly changing their orientation. This phenomenon, characterized by a delayed stall followed by a sharp drop in lift, has prompted efforts to prevent or delay it. This study aims to predict the lift of an airfoil randomly maneuvering under dynamic stall conditions by utilizing sparse surface pressure measurements, which we believe can maximize the effectiveness of various dynamic stall suppression techniques. Using data from large eddy simulations, we demonstrate that a long short-term memory network, fed with raw surface pressures, delivers accurate predictions. Also, a new method introduced here, IdDM, conclusively links the characteristic frequency range of pressure fluctuations that emerges during dynamic stall to the chord-lengthscale vortex dynamics. However, further analysis suggests that the forecast predominantly relies on the lower-frequency components tied to the airfoil motion, possibly because the vortex dynamics are dependent on and sensitive to the airfoil motion. Meanwhile, specific sensor locations prove more informative than others in this random, unsteady flow, and we show that optimal sensor placement can be quickly determined using mutual information alone. This analysis reveals that two pressure sensors positioned near the leading edge, one on each side of the airfoil, capture most of the information needed to predict lift. The lift can be predicted with sparse sensors because surface pressures are strongly correlated across the airfoil, with large-scale flow structures dominating the forces.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158900</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Visual Intelligence from Photons to Action</title>
<link>https://hdl.handle.net/1721.1/158899</link>
<description>Designing Visual Intelligence from Photons to Action
Young, Aaron
For embodied agents to perceive and effectively act within their environment, they must sense the world around them and translate this information into meaningful and safe actions; a process fundamental to both biological and human-engineered systems. Nature has evolved highly attuned visual systems, resulting in diverse and efficient eyes capable of facilitating complex behaviors. Conversely, roboticists have engineered sophisticated cameras and sensors, enabling robots to perform tasks beyond the capabilities of natural systems. This thesis explores the design of visual intelligence by integrating insights from both biology and engineering in two complementary parts. In Part I, we computationally recreate the evolution of vision within simulated embodied agents. By evolving the physical and neural aspects of vision in simulation, and training these visually capable agents with deep reinforcement learning, we demonstrate that task-specific environmental pressures lead to distinct eye morphologies and behaviors, mirroring observations in biological evolution. This in silico approach enables us to investigate the fundamental principles underlying the emergence of animal eyes and provides a framework for exploring novel sensor designs subject to both biological (e.g., survival) and engineering constraints (e.g., manufacturability). In Part II, we leverage visual cues not typically used in nature (i.e., active illumination and multi-bounce light) to demonstrate enhanced robotic navigation via non-line-of-sight imaging. Using single-photon LiDARs, we capture the temporal propagation of individual photons, enabling the detection of objects around corners. This sensing capability allows us to develop robots that effectively anticipate and avoid hidden obstacles, reducing navigation time by 50% and overall trajectory length by 33%.
Together, these works demonstrate how the synthesis of biologically inspired design principles with advanced sensing modalities can enhance embodied agents' capabilities, while providing insights into both natural vision evolution and robotic perception.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158899</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Archean origin of assimilatory sulfate metabolisms provides novel insight into redox conditions of early Earth environments</title>
<link>https://hdl.handle.net/1721.1/158898</link>
<description>The Archean origin of assimilatory sulfate metabolisms provides novel insight into redox conditions of early Earth environments
Payette, Jack G.
Dissimilatory sulfur metabolisms recording differing biological isotopic fractionation are well-studied, important components of sulfur cycling (Mateos et al., 2023). Assimilatory sulfur metabolisms and genes across life provide a complementary window into sulfur biogeochemistry, with individual pathways having specific isotopic fractionations acting on distinct redox states (e.g. sulfate, sulfide, sulfite) for anabolism (Liu et al., 2012). One assimilation pathway starts with sulfate adenylyltransferase (sat/ATP sulfurylase) catalyzing the reaction of adenosine triphosphate (ATP) with sulfate (SO₄²⁻) to form adenosine 5’-phosphosulfate (APS), leading to the incorporation of more reduced sulfur into biomolecules. This sat/ATP sulfurylase enzyme represents the first step required by life to incorporate sulfate and informs our understanding of biological processes performing this fundamental chemical reaction. A phylogenetic and molecular clock analysis of the sat/ATP sulfurylase protein family (E.C. 2.7.7.4) was performed to determine the age of sulfate assimilation proteins. The extant diversity of sat proteins was estimated to have a last common ancestor ~3.24 Ga (95% CI 3.52–3.06 Ga) using relaxed molecular clocks calibrated with eukaryotic and cyanobacteria age ranges from previously published fossil-calibrated investigations. These results suggest sulfate cycling in Paleoarchean environments, despite extensive evidence of low marine sulfate concentrations (Crowe &amp; Canfield et al., 2014). Archean sulfate biogeochemical cycling could result from microbial sulfur oxidation, and sources could include abiotic oxidation of volcanic sulfur, hydrothermal processes, or pyrite (Canfield, 2001, Lyons et al., 2024). This phylogenomic evidence of sulfate during Archean times provides an independent complement to geochemical records and indicates that sulfur redox chemistry during the Archean was likely more complex than previously described.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158898</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the Atmospheric and Oceanic Drivers of Atlantic Multidecadal Variability and Predictability</title>
<link>https://hdl.handle.net/1721.1/158897</link>
<description>Investigating the Atmospheric and Oceanic Drivers of Atlantic Multidecadal Variability and Predictability
Liu, Glenn Yu-zu
Despite its numerous impacts across the Earth system, the relative importance of ocean and atmospheric dynamics in generating Atlantic Multidecadal Variability (AMV) remains an open question. This thesis presents three pathways to understanding how oceanic and atmospheric processes generate key spatio-temporal signatures of AMV through a combination of process-based and data-driven approaches. Part 1 (Chapter 2) takes a "bottom-up" approach, building a hierarchy of stochastic models to identify the contributions of vertical entrainment and seasonality in local upper-ocean processes to sea surface temperature (SST) variability. Through this hierarchy, I highlight unrealistic features present in slab ocean models widely used to isolate atmospheric contributions to AMV. On the opposite end of the spectrum, Part 2 (Chapter 3) utilizes a "top-down" data-driven approach where deep neural networks are trained to predict the North Atlantic SST Index in both the Community Earth System Model 1 Large Ensemble (CESM1) and observation-based datasets using atmospheric and oceanic predictors. I apply explainable artificial intelligence techniques to highlight a significant source of multidecadal predictability over the Transition Zone in oceanic predictors such as sea surface salinity (SSS) and sea surface height in the presence of external forcings. Part 3 (Chapter 4) returns to the process-based hierarchy, but applies it to understanding SSS variability. The stochastic salinity model is used to investigate the role of mixed-layer re-emergence, subsurface ocean damping, and SST-evaporation feedback in shaping the pattern and amplitude of AMV.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158897</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Falling isn't the End: Reimagining Demolition as a Creative Practice</title>
<link>https://hdl.handle.net/1721.1/158896</link>
<description>Falling isn't the End: Reimagining Demolition as a Creative Practice
Lee, So Jung
This thesis investigates resilience not as an endpoint but as a condition of continuous transformation. It critiques the shortcomings of current architectural discourse in addressing climate disasters, waste, and carbon footprints. While these crises are widely acknowledged, architecture often operates within restrictive economic, legal, and cultural systems, relegating resilient design to the periphery or diminishing its potential impact.&#13;
Collapse, traditionally perceived as failure, is reimagined here as a generative moment—an opportunity to rethink materials, systems, and the narratives that shape them. Central to this exploration is the concept of assembly, where materials are designed with deliberate life spans—some transient, others enduring. By anticipating the gaps and shifts that arise when permanence is no longer assumed, this thesis proposes new possibilities for adaptive design and architectural resilience within the evolving rhythms of life.&#13;
To articulate these ideas, the thesis employs speculative scenarios and temporal media. These tools position architecture as a system in flux, evolving in tandem with societal and environmental changes. Through narrative-driven methodologies, this work seeks to expand architectural discourse, prompting reflection on the discipline’s foundational assumptions while connecting it to broader cultural and systemic challenges.&#13;
Ultimately, this thesis redefines resilience—not as resistance or mere survival but as a dynamic and imaginative practice. It advocates for architecture’s leadership within the broader zeitgeist of sustainability, transforming pressing global challenges into opportunities for creative agency and systemic reinvention.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158896</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>American (Ise): On the Lifecycle of Stadiums in the United States</title>
<link>https://hdl.handle.net/1721.1/158895</link>
<description>American (Ise): On the Lifecycle of Stadiums in the United States
Wang-Xu, Mackinley
When the Kingdome in Seattle was completed in 1976, it was celebrated as a marvel of modern engineering, expected to last for centuries. Yet, in an ironic twist, it was demolished by implosion in 2000, surviving only twenty-four years. The Kingdome epitomizes the issue of short lifespans that has plagued American stadiums since the post-war era. A broad survey of these structures reveals an average lifespan of just three decades—a startlingly brief tenure for buildings of their scale and significance. These stadiums also follow a distinctive model of renewal. Similar to the Shikinen Sengu ritual at the Ise Shrine, a new stadium is often constructed adjacent to its predecessor. However, unlike Ise, where materials from the old shrine are reused and disseminated throughout Japan’s network of shrines, old stadiums are almost always demolished and discarded. This thesis seeks to superimpose Ise as a model onto American stadiums, envisioning an architecture that embraces both impermanence and longevity through circularity. Investigations into the barriers to circularity specific to stadiums serve as the foundation for design proposals, spanning scales from the detail to the site. The project ultimately imagines a stadium in a constant process of disassembly and renewal, where its spatial and programmatic potential challenge paradigms of completeness. In the context of a climate crisis demanding waste reduction, and for a typology notorious for its excess, stadiums can learn to do more with less.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158895</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Information Sharing for Satellite Navigation and Coordination</title>
<link>https://hdl.handle.net/1721.1/158894</link>
<description>Leveraging Information Sharing for Satellite Navigation and Coordination
Dolan, Sydney
As the number of objects in orbit grows, so does the risk of collisions. The sheer volume of collision warning messages far exceeds the capacity of human analysts, placing a significant burden on satellite operators and underscoring the need for autonomous, decentralized traffic management. Unlike centralized conjunction analysis, decentralized space traffic management distributes coordination across multiple independent nodes, allowing satellites to collaborate directly. This approach could enhance the resilience, speed, and international cooperation of space operations, helping to manage the space environment. For decentralized space traffic management to be viable, satellites must possess an accurate understanding of both the locations and intentions of other satellites. While satellites have precise knowledge of their own state, this accuracy diminishes when predicting the state of others. This gap is due to the limitations of onboard measurement systems and knowledge of each satellite’s structure, configuration, and maneuverability. Such differences motivate the exploration of information sharing between operators to improve coordination. Sharing information could benefit both individual operators and the broader space community by enabling more accurate trajectory predictions, facilitating formal maneuver negotiations, and enhancing overall orbital safety and efficiency. The main contribution of this thesis is to develop methods for autonomous satellite decision-making. By advancing the state of satellite autonomy, we can enhance high-level decision-making processes, enabling more adaptive and intelligent satellite coordination. This thesis begins by developing a multi-agent reinforcement learning environment to simulate satellite interactions in complex, high-dimensional settings. Then, we relax the assumption of synchronous communication and explore an alternative learning framework that relies on asynchronous communication between satellites.
Our final contribution lies in a game-theoretic model of operator behavior in non-cooperative settings. Space is a competitive environment, and willingness to collaborate is mixed. We therefore use game theory to derive strategies for maneuver selection and timing.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158894</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Max-Stable Processes, Measure Transport &amp; Conditional Sampling</title>
<link>https://hdl.handle.net/1721.1/158893</link>
<description>Max-Stable Processes, Measure Transport &amp; Conditional Sampling
Konomis, Dimitris C.
The modeling of extremes, known as extreme value theory (EVT), aims to understand events characterized by extreme deviations from the mean of a probability distribution. These events are significant in fields such as finance, environmental science, engineering, and insurance. EVT seeks to predict the occurrence and impact of these events, which often have severe consequences. Applications of EVT include modeling extreme market movements in finance, natural disasters in environmental sciences, structural reliability in engineering, and catastrophic event risk management in insurance. Conditional sampling and simulation methods, such as normalizing flows and measure transport, are crucial for estimating extremes at unmonitored sites or under specific conditions, thereby improving our understanding and risk management strategies. The goal of this thesis is to make significant contributions to both extreme value theory and measure transport, as well as to establish a link between the two. First, we develop new Markov chain Monte Carlo algorithms for conditional sampling of max-stable processes. Next, we create models that incorporate physical laws, encoded by partial differential equations, to extend max-stable processes into regions without observations. Third, we design specialized transport map frameworks for distributions with bounded support, enabling accurate and efficient sampling and inference. Finally, we use transport maps parameterized by neural networks to learn and condition the distributions of shortest path statistics in polymer systems, accelerating the prediction of microstructural evolution under various conditions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158893</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical investigations of vortex dynamics: bursting, twist waves, and sensitivity analysis</title>
<link>https://hdl.handle.net/1721.1/158892</link>
<description>Numerical investigations of vortex dynamics: bursting, twist waves, and sensitivity analysis
Ji, Lingbo
Vortical structures are ubiquitous in real-world fluid flows, from the vortices generated by swimming fish to the wakes of aircraft and propellers. They form the backbone of high Reynolds number turbulent flows. Their dynamics are governed by non-linear processes, leading to a range of vortical instabilities that significantly influence engineering applications. Despite decades of research, many questions remain about core mechanisms responsible for the dynamic evolution of vortical structures due to the nonlinearity and complexity of flows at high Reynolds numbers. A particular scenario that lacks systematic investigation is vortices with initial core-size variations, which leads to the phenomena of twist wave propagation and vortex bursting. In this thesis, we first examine straight vortex tubes with initial core-size perturbations at high Reynolds numbers by performing high-fidelity numerical simulations. The differential rotation along the vortex tubes generates twist wave packets that propagate and collide, resulting in a sudden increase in the local core size – the phenomenon of bursting. We analyze the effects of perturbation amplitudes on the detailed evolution at each stage, including the underlying mechanisms for the growth and decay of the bursting structure. The bursting process is associated with significant energy dissipation, which is quantified and compared to that of unperturbed vortex tubes. Meanwhile, vortices in real fluid flows are often nonrectilinear and experience strain from environmental or self-induced effects. We extend our study to curved vortex tubes and investigate the impact of centerline non-rectilinearity on twist wave propagation and the stability of the bursting structure. Additionally, we adopt a relatively recent geometric perspective on vortical flows and analyze the helicity dynamics during the flow evolution. 
To systematically initialize vortex dynamics simulations based on a late-time or time-averaged flow metric, we explore different methods for sensitivity analysis of two-dimensional vortical flows. The sensitivity values obtained are then used in gradient-based optimizations, which shows promising pathways for control and optimization of vortical flow applications. Additionally, we present a numerical study of the locomotion of a rotating cylinder pair with periodic gaits in a low Reynolds number flow. We characterize the motion pattern and efficiency of the cylinder pair through a combination of theoretical arguments and numerical simulations, which provides a foundation for potential engineering applications at the microscale. Overall, our findings provide understanding of fundamental mechanisms for vortex bursting and associated twist wave dynamics at high Reynolds numbers, explorations of sensitivity analysis for vortical flow applications, along with insights into locomotion at low Reynolds numbers.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158892</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Insurance</title>
<link>https://hdl.handle.net/1721.1/158891</link>
<description>Building Insurance
Janson, Charles Perot
Over the past 350 years, the building insurance industry has been shaped by a series of major urban fires, each incrementally standardizing risk assessment and property valuation as financial products of risk management. In recent years, however, climate change has introduced unprecedented weather events that challenge the fine-tuned models of insurance; in particular, the rise of wildfires in California and the Pacific Northwest has led to the local withdrawal of insurance altogether. Within these contexts, the spatial conditions inherited from a highly insured past continually sustain separation, individual prosperity, and standard assemblies as inheritances of expansionist agendas. At this juncture of system failure, this thesis asks: how can architecture rethink more cooperative forms of building and living together that localize risk sharing, responsibility, and stewardship? While wildfire defense strategies put forth by insurance companies and building codes armor the stick-frame American single-family home and its aesthetic traditions, this thesis proposes a new building typology entirely: a neighborly cooperative of adjoined homes. Under a single roof, property lines are transformed into sites of mutual stewardship, manifesting insurance no longer as an abstract response to risk, but as a series of social and spatial relationships between neighbors.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158891</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Chongqing Tiandi Project: An Asset Management Perspective</title>
<link>https://hdl.handle.net/1721.1/158890</link>
<description>Evaluating Chongqing Tiandi Project: An Asset Management Perspective
Yang, Junsi
This thesis uses the Chongqing Tiandi project as a case study to analyze the entire process of development and asset management for large-scale urban renewal projects in China's second-tier cities. It focuses on the motivations and outcomes of Shui On Land's transition from an asset-heavy to an asset-light model. Based on theoretical analysis (Chapter 2), corporate-level financial analysis (Chapter 3), and project-level in-depth studies and interviews (Chapter 4), the thesis explores the logic and impact of this strategic transformation from multiple perspectives. The theoretical analysis summarizes real estate lifecycle management theory, portfolio theory, and corporate strategic transformation theory, providing a framework to examine Shui On Land's strategic decisions. The financial analysis reveals that, from 2015 to 2017, Shui On Land faced significant financial pressure with high debt ratios and cash flow constraints, necessitating systematic asset disposals. While the company disposed of multiple assets during this period, Chongqing Tiandi's 79.2% equity disposal was particularly strategic due to its position as a high-risk, low-return asset within the company's portfolio. The project-level analysis and interviews demonstrate that replicating successful development models from first-tier cities in second-tier markets faces unique challenges. In Chongqing Tiandi's case, these challenges manifested in multiple ways: limited residential price premiums due to local land supply policies, substantial investment requirements for super high-rise developments exceeding $1 billion, and persistently low office rental rates in the local market. These factors compromised the project's financial self-sustainability and made it particularly vulnerable in Shui On's portfolio, especially when compared to projects in other second-tier cities like Wuhan. 
The development and subsequent equity sale of Chongqing Tiandi not only provided essential financial support for Shui On Land but also reflected a strategic decision to divest from a project where market conditions created both immediate challenges and future uncertainties. This research provides valuable references for the development of large-scale projects in China's second-tier cities, emphasizing the need for developers to utilize funds efficiently, adapt flexibly to market changes, and focus on achieving long-term value. These insights hold significant implications for sustainable development in complex market environments.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158890</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cooling Innovation and Circularity: Addressing Water Stress in the Age of AI-Driven Data Centers</title>
<link>https://hdl.handle.net/1721.1/158889</link>
<description>Cooling Innovation and Circularity: Addressing Water Stress in the Age of AI-Driven Data Centers
Kseibati, Reem
This thesis examines the growing demand for data centers and the critical challenges posed by their water and energy consumption. As artificial intelligence (AI) technologies expand, the infrastructure supporting these systems has become essential. The study highlights the projected increase in data center capacity driven by AI workloads and focuses on the impact in water-stressed regions across the United States. Given the resource-intensive nature of data centers, the research explores cooling technologies aimed at reducing environmental impact. Traditional air cooling is compared with innovative liquid and evaporative cooling techniques. Additionally, the thesis promotes circular economy principles, emphasizing resource efficiency, reuse, and regeneration as a pathway to sustainable operations.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158889</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-Living in Seoul: Addressing Housing Needs and Redefining Rental Market Trends</title>
<link>https://hdl.handle.net/1721.1/158888</link>
<description>Co-Living in Seoul: Addressing Housing Needs and Redefining Rental Market Trends
Park, Suhyeon
Co-living emerged as a novel asset class in the mid-2010s, addressing the housing needs of urban residents affected by rising housing costs, increasing urban migration, and the growing prevalence of single-person households. In South Korea, co-living has gained attention as a viable alternative to traditional housing, driven by unique local dynamics, including the decline of the dominant Jeonse system and a significant shortage of housing tailored to single-person households. With a growing preference for monthly rental systems over the Jeonse system, both local conglomerates and start-ups have capitalized on the opportunity to offer company-operated co-living spaces. As the market grows, major international investors and global co-living providers have also entered, reflecting a unique market environment where institutionalized housing options are expanding alongside a notable shift in rental transaction systems. In this new era of urban housing, co-living is rapidly expanding and gaining popularity. This thesis seeks to answer the following question: What factors have driven the emergence and growth of the co-living market in Seoul, and what is its growth potential? To address this, it starts with an analysis of market drivers, provider strategies, and regulatory developments, followed by projections of market potential and an assessment of potential threats and mitigation strategies for the long-term viability of co-living in Seoul. The goal is to offer insights for co-living providers to optimize their spaces and services. The findings suggest that while co-living addresses unmet housing demand, its long-term success depends on balancing operational efficiency with tenant satisfaction. While these strategies are applicable in other cities, they are particularly critical in Seoul, where the Jeonse system remains a strong and historically preferred alternative.
In Seoul, co-living serves a dual mission: introducing an innovative housing model and reshaping the paradigm of the Wolse rental housing system. To succeed, co-living operators must clearly articulate their unique value proposition, addressing both the housing needs of urban residents and the broader evolution of the rental market.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158888</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the nature and measurement of variational bias: a developmental perspective</title>
<link>https://hdl.handle.net/1721.1/158887</link>
<description>On the nature and measurement of variational bias: a developmental perspective
Cai, Haoran
Natural selection cannot work with imaginary phenotypes, only those realized by developmental systems. The observed diversity of life on Earth occupies only a subset of conceivable forms in the absence of selection. This is because of the non-linear and discrete nature of genotype-to-phenotype maps as an outcome of the developmental system. Despite that, it is widely accepted in population and quantitative genetic modeling that phenotypic variation produced by random mutations is isotropic and uniform. Conventional methods linking genetic variants and phenotypic variation often assume that the origin of phenotypic variation is purely due to genetic and environmental factors. Here, in this thesis, I adopt a developmental causation view, which proposes that patterns of variation may emerge as an inherent consequence guided by physico-chemical principles and that such patterns cannot be fully reduced to genetic factors. The distribution of phenotypic variants that arise from genetic and environmental variation is influenced by the developmental processes that transform the embryonic phenotype into the adult form. This developmental process is subject to constraints that stem from the structure, character, composition, or dynamics of development. We term such a constraint developmental bias. Despite the prevalence of developmental bias, detecting and testing its role remains a challenge. To address this gap, in the thesis, I propose frameworks and showcase examples aimed at identifying developmental bias and testing its implications in shaping phenotypic evolution. Specifically, I answer three questions: (1) How does the central component of the nonlinear genotype-to-phenotype map, transcriptional regulation, bias the analyses of gene-gene interactions? (2) How can the contribution of developmental bias to trait-trait interdependencies be disentangled? (3) How does expression variability affect gene retention and gene expression evolution following gene and genome duplication?
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158887</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ending Well, Making the Harvest-Paths of Our Values</title>
<link>https://hdl.handle.net/1721.1/158886</link>
<description>Ending Well, Making the Harvest-Paths of Our Values
Kpodo, Courage Dzidula Kwaku
Any single story shrinks all others. In a place historically cultivated for the cocoa cash crop, this thesis proposes reorienting architectural practice towards a plural valuing of land and its constituent spirits. The journey begins in 2022 with my acquisition of a 99-year lease for a 5-acre land in Ghana. Prior to the conception of an academic proposal, this was to preserve and grow ecological and financial value through time.&#13;
Located on a hill-cluster in the Eastern Region, this place is crucial as the birthplace of Ghana’s cocoa industry, which made Ghana the world’s largest cocoa exporter by 1911. Spurred by economic and colonial incentives, farmer-settlers acquired and cultivated forest land including the one I presently steward. They forged communities that live on despite a subsequent decline of cocoa production in the region. Five centuries of colonial influence in West Africa reduced a plural landscape into singular extractive narratives, creating place-names like the Gold Coast, renamed Ghana after independence. The capitalist framework of monocultural extraction, one reliant on a colonial government and its land survey department, continues under contemporary African states. Architecture and planning—a practice historically tied to power and capital—remains instrumental in this system, often overlooking other ways of valuing land.&#13;
This thesis confronts the dispositions of an inherited profession by foregrounding the practices and materials of a socio-cultural paradigm. It is epitomized by the tree called Newbouldia laevis (African boundary tree) and its plural meanings in West Africa. It follows a cocoa harvest-path from a community named after a farmer-settler, Yaa-Aso, and ascends the hills, crossing the land limits of 7 farmers. It ends on the land I hold, with a lease ending in CE 2122.&#13;
In July 2024, I led a convocation of the farmers along the path in the defunct cocoa distribution building, toward framing futures based on other values apart from capital. Three languages were spoken in that gathering: Twi, Anlo-Eʋe, and English. It resulted in a 7-foot expansion of the path, and the pacification of a seasonal spirit-stream that crosses it. They set the context for imagining a series of 5 moments, herein recorded, that explore a value system of things spiritual and communal, offered by the transgressions of a widened path and the land I hold at its end.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158886</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Systems-Theoretic Approach to Design of Early Concepts for Novel, Complex Systems in Aerospace</title>
<link>https://hdl.handle.net/1721.1/158885</link>
<description>A Systems-Theoretic Approach to Design of Early Concepts for Novel, Complex Systems in Aerospace
Hillman, Alexander P.
The complexity of engineered systems has grown exponentially over the last forty years. One of the main challenges in modern engineering is managing this complexity, particularly as the pace of technological change continues to accelerate across industries. Traditional approaches to generating early concepts for novel, aerospace control-oriented systems typically employ a design-first approach, ignoring critical steps required to truly understand the intent and context of a new system. This tendency also leads to a focus on low-level, highly granular design activities that seek to integrate advanced technologies together for technology’s sake. Unfortunately, today’s applied early concept generation methods do not facilitate the effective generation of early system concepts and an initial high-level design for aerospace control systems. To address these shortcomings, this thesis proposes a systematic and rigorous framework to generate early system concepts using Systems-Theoretic Accident Model and Processes (STAMP) principles and a new lens to examine system intent for a novel, complex system. This work also introduces a new level of abstraction for a portfolio-of-systems context and a method to propose an initial design artifact for new systems that is both architecture-agnostic and relevant during the earliest system engineering lifecycle activities. This method, Systems-Theoretic Concept Design, uses a top-down, three-phased approach to conduct mission analysis and determine the intent for a new system within a specific portfolio-level context. Upon building this intent model, the method enables the synchronization of stakeholder mental models through the use of transformation models built using the principles of Systems Theory. Finally, in its last phase, this early design concept generation framework delivers an initial design artifact that is technology- and requirements-agnostic in the language of Systems Theory using the semantics of STAMP. 
This initial design artifact is in the form of the Portfolio-of-Systems control structure, a control structure that frames a portfolio’s desired high-level capability as a control problem at a new level of abstraction while enabling analysis and examination of complex interactions across systems that may operate asynchronously or in geographically separated operating environments.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158885</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational-Experimental Process Development for Laser Powder Bed Fusion Additive Manufacturing</title>
<link>https://hdl.handle.net/1721.1/158884</link>
<description>Computational-Experimental Process Development for Laser Powder Bed Fusion Additive Manufacturing
Weißbach, Reimar
Laser powder bed fusion (LPBF) additive manufacturing (AM) is instrumental for advances in high-value industries such as aerospace and medical devices. However, widespread adoption is still held back, in part due to challenges with powder handling, identification of process parameters, part qualification and quality control, and low build rates that lead to high part costs. This thesis presents workflows, tools, and understanding for practitioners and researchers seeking to address these challenges, in particular (i) powder spreading, (ii) parameter selection, and (iii) build rate improvement. Cohesive powders (D50 ≤ 20 &#120583;&#119898;) are challenging to spread and therefore not commonly used in LPBF, but promise more stable melting conditions during laser melting and potentially allow for finer geometrical resolution. Various spreading strategies are explored using an integrated discrete element-finite element (DEM-FEM) framework and a schematic process window for counter-rotational roller spreading is proposed. A new strategy of spreading with a transversely oscillating tool is chosen for experimental implementation and validated using a custom-built mechanized powder spreading testbed. Powder layers are analyzed using X-ray transmission imaging and layer quality is statistically correlated to kinematic spreading parameters. A methodology for performing melt track experiments using high-precision metal templates as well as a machine learning-based automated image analysis tool is presented and applied to melt track scaling studies. Based on single track parameter studies with layer thicknesses and laser spot sizes of up to 600 &#120583;&#119898;, a dimensionless LPBF process window using the normalized enthalpy Δ&#119867; / ℎₛ as well as the Fourier number is developed. A workflow for rapid LPBF build parameter selection is proposed, which is shown to fabricate near-fully dense parts (up to 99.99 %) on the first attempt.
Build rate scaling analysis reveals the trade-off between laser spot size and laser scan speed given laser power limitations. Further, LPBF with a standard powder (15 − 45 &#120583;&#119898;) is compared to a fine powder (0 − 25 &#120583;&#119898;) under similar processing conditions. The fine powder exhibits superior melt track stability and continuity, as well as significantly increased melt track cross-sectional area, allowing build rate to be increased by almost 20 %. Finally, to enable better understanding of the underlying thermo-fluid dynamics of the melt pool, an approach for computational model parameter estimation using Bayesian inference is presented and applied to the important model parameter of laser absorptivity. This is within the context of a Smooth Particle Hydrodynamics (SPH) computational melt pool model, developed collaboratively by researchers at the Technical University of Munich. The diffuse interface approach employed in SPH is validated using a discretization refinement study, showing the sensitivity of physical phenomena characteristic for LPBF, such as the vapor-induced recoil pressure, to computational hyper-parameters. These combined contributions enhance both practical implementation and theoretical understanding of LPBF, ultimately advancing the field of additive manufacturing towards more cost-effective and higher quality LPBF processes.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158884</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computational Thermo-Chemo-Mechanics Framework for the Large-Scale Simulation of Material and Structural Failure in Hypersonic Environments</title>
<link>https://hdl.handle.net/1721.1/158883</link>
<description>A Computational Thermo-Chemo-Mechanics Framework for the Large-Scale Simulation of Material and Structural Failure in Hypersonic Environments
Pickard, Daniel N.
Materials and structures subjected to the extreme conditions of hypersonic flight undergo complex degradation and fracture processes. This thesis presents a theoretical formulation and a computational framework that enables the large-scale simulation of thermochemically fracturing solids exhibiting complex post-fracture interface response. The continuum theory is based on a general thermodynamically-consistent description of the coupled multiphysics problem, and the numerical formulation extends the scalable discontinuous Galerkin (DG)/Cohesive Zone Modeling paradigm to thermo-chemo-fracture mechanics. The approach is distinguished by its unified DG treatment of the coupled problems, which facilitates the analysis of fracture propagation, fracture-dependent heat and mass transfer as well as thermally-activated solid-phase chemical reactions. The framework is verified against two analytical solutions of boundary value problems drawn from thermo-poro-elasticity and thermally-driven delamination. Three-dimensional simulations of a benchmark thermochemically-driven fracture problem illustrate the parallel scalability of the fully-coupled computational framework. We utilize this framework to render models of passive oxidation-induced fracture in ultrahigh temperature ceramics computationally tractable. First, a rigorous constitutive theory is shown to capture the molecular diffusion of oxidant through the reaction product layer using only fundamental transport properties, i.e. without the need for calibration to reaction experiments. The physical processes observed on the diminutive scale of an oxide layer are explicitly resolved, but the approach is limited to microscale analyses by scale separation. We sidestep this limitation by specializing the general theory under specific phenomenological assumptions, thereby yielding a practical model that can reproduce oxidation experiments.
We use this specialized model to analyze oxidation-induced swelling, fracture and delamination in SiC/coating systems, and unveil the coupled thermochemical response as well as fracture morphologies in the vicinity of critical flaws. Then, we conduct a parametric study of three-dimensional coatings that exposes the channeling mechanisms above penny-shaped delaminations of various sizes. The computational analyses identify a transition from decussating to circumferential channel cracking that explains the wide variety of surface channel cracks observed in experiment. The physical mechanisms and fracture morphology regimes are corroborated by a simple structural theory. Finally, cohesive fracture models, splitting methods and thermal solvers are developed specifically for applications to thermally shocked ceramics. Simple and rigorous calibration procedures are proposed which facilitate the direct analysis of fragmentation and comminution in brittle solids subjected to extreme advective heat transfer. The presented examples serve as evidence that the framework can successfully enable three-dimensional, thermochemically-coupled fracture analyses of unprecedented physical fidelity, which furnish new insights into complex hypersonic thermal protection system response.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158883</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the dynamics and interparticle forces of electrostatically stabilized colloidal suspensions</title>
<link>https://hdl.handle.net/1721.1/158882</link>
<description>On the dynamics and interparticle forces of electrostatically stabilized colloidal suspensions
Krucker-Velasquez, Emily
In a broad spectrum of industrial and biomedical applications, the equilibrium and dynamic properties of colloidal suspensions play a pivotal role, with systems ranging from simple gold nanoparticles in electrolyte solutions to complex assemblies like micelles, vesicles, nanocapsules, and dendritic polymers. Typically, these systems are approached through the Derjaguin–Landau–Verwey–Overbeek (DLVO) theory and Poisson-Boltzmann models, frameworks that approximate charged particles as point charges to predict interparticle interactions. While these frameworks have been instrumental for low-concentration, idealized systems, they fall short in capturing critical behaviors in more concentrated regimes. In such environments, overlooked phenomena—such as excluded volume effects and ion-ion correlations—become essential in shaping the colloidal system’s equilibrium and dynamics. By leveraging advanced computational techniques, we systematically interrogate these mesoscale interactions, offering insights that extend beyond the traditional paradigms of mean-field theory and enhance our understanding of colloidal behavior in complex environments. The first part of this work presents the development of efficient algorithms that significantly advance the computational speed of induced polarization calculations within Brownian Dynamics simulations of polarizable colloidal particles. By establishing a new benchmark in simulation methodologies, these algorithms lay the groundwork for exploring complex soft matter systems, enabling deeper insights into the dynamic and equilibrium properties of colloidal suspensions beyond the limitations of conventional theories. Together, these advancements provide a robust computational framework for examining mesoscale interactions in concentrated colloidal systems, where ion correlations, finite ion volumes, and thermal fluctuations critically influence behavior.
The next part of this work focuses on the study of equilibrium properties of charged soft matter systems in crowded environments through the implementation of robust computational techniques. We meticulously examine charge-density correlations and clustering behaviors that arise due to the complex electrostatic interactions between colloidal particles. At high ion concentrations, the system undergoes distinct structural transitions that are modulated by the ionic strength and specific particle characteristics. These transitions are characterized by emergent patterns in the spatial distribution of charges, forming structured clusters that reflect the balance between electrostatic and entropic forces. We further our studies by computing the potential of mean force (PMF) between metallic nanoparticles, a measure of the effective interaction potential that inherently captures how particles interact across various separation distances in an electrolyte. The PMF analysis reveals oscillatory behavior in particle interactions at different concentrations. Our study delivers robust free energy profiles, enabling a more nuanced understanding of the electrostatic forces at play in dense colloidal suspensions. These insights shed light on the mechanisms of charge screening and packing within high-density systems. The final part of this thesis focuses on the study of the non-linear transport properties of concentrated macroions to external electric fields, revealing intricate dependencies on both ionic structure and external electric fields. Our studies reveal how conductivity is modulated by charge density correlations and field strength. A notable disruption of local ionic atmospheres was observed with increasing field strengths, which in turn accelerates ion mobility and significantly alters the transport properties. 
We further advance the investigation into the dynamic response of concentrated macroions and electrolytes by examining their behavior under time-varying electric fields. Through simulations involving frequency sweeps and chirp signals, we discerned that the dynamic response of these concentrated charged soft matter systems is best understood through the lens of two distinct transport regimes—characterized by short- and long-time responses. This bifurcation enables the introduction of a relaxation time scale that captures the intricate coupling between ionic correlations and the macroscopic system response, highlighting the pivotal role of excluded volume effects in densely populated environments. The study provides a detailed framework for manipulating ion transport in concentrated electrolytes and macroions, paving the way for innovations in fields reliant on precise control of electrostatic conditions and ionic mobility.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158882</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Sustainable Recommender Systems</title>
<link>https://hdl.handle.net/1721.1/158881</link>
<description>Designing Sustainable Recommender Systems
Huang, Lei
Recommender systems are widely deployed to serve users with content they like. However, content must be created and insufficient demand dampens a creator’s production incentive. We argue that the canonical recommender system may not be sustainable if, by promoting the content each user likes the most, it suppresses the creation incentive of the less popular but still valuable content. We propose a “sustainable recommender system” solution – subsidize creators with demand according to their “sensitivity,” which measures how easily a creator can be incentivized by demand, and their “contribution,” which measures how important a creator is to users overall. Theoretically, we prove that this algorithm maximizes long-term user utility by internalizing the externality of user choice on other users. Computationally, our main innovation is to estimate creator contribution using computer vision, where we train a deep-learning model to compute how creator distribution affects system-wide user utility. Analyzing data from a large content platform, we show that our algorithm incentivizes valuable creators and sustains long-term user experience.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158881</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Detection and Localization of Pressure Transients in Water Distribution Systems</title>
<link>https://hdl.handle.net/1721.1/158880</link>
<description>Detection and Localization of Pressure Transients in Water Distribution Systems
Liu, Shiqing
Water distribution systems are critical to urban water supply, but as they age they become increasingly vulnerable to bursts and leaks, leading to significant economic, social, and environmental consequences. The complexity and inaccessibility of underground pipelines present substantial challenges for their maintenance. As a result, the development of real-time monitoring systems for these systems is essential to reduce water waste and minimize adverse impacts to consumers and surrounding infrastructures. This thesis investigates the effectiveness of continuous pressure monitoring systems in detecting pipe bursts and transient events within water distribution systems. Using PTSNet, a parallel transient simulation Python package, we simulate pipe burst events at each node in a real-world system and examine the pressure-time response at all other nodes. By adding Gaussian noise to the simulation results to mimic real-world background noise, we assess the detection success of pressure signals at each node using a modified CUSUM algorithm. The correlations between detection success and three spatial metrics between the source and sensor are calculated. We show that a spatial metric, the effective number of magnitude-changing junctions along the fastest path (NJFP), has a stronger correlation with detection success than the shortest travel path or the shortest distance. By comparing detection performance for networks with differing topologies (gridded, looped, and branched) and pipe characteristics, we discover that multiple shortest paths (MSP; where pressure waves from different paths arrive almost simultaneously at the sensor) amplify the signal due to the transient interference phenomenon and enhance the detectability of transients. This effect is particularly pronounced in gridded networks.
We investigate the capabilities of monitoring, from a network of fixed stations, to achieve unique localization of pressure transient events using a time-reversal back-propagation algorithm. This algorithm identifies the event source by matching the theoretical and detected arrival time differences at the sensors. A novel time differences space is constructed, representing the independent shortest time differences from locations along all the pipes to the sensors, based on network information and sensor locations. Pipe sections with unique shortest time differences are identified as uniquely localizable pipes. Effective-NJFP-based probabilities of transient detection with accurate arrival times (error &lt; 0.1s) are derived from these simulation results. The localization performance of the sensor network is evaluated by the probability-weighted total lengths of the pipes that can be uniquely localized.&#13;
We consider sensor placement strategies aimed at maximizing the detection and localization performance of pressure monitoring sensor networks. Detection performance is defined as the total weighted pipe lengths in the network, where the weight of each pipe corresponds to its detection probability. Two problems are addressed. First, to maximize transient event detection performance when only a limited number of sensors are available, we formulate a mixed-integer programming (MIP) optimization model and employ a genetic algorithm to find solutions. The second problem involves determining the minimum number of sensors and their optimal locations to detect transient events across the entire network without a constraint on the number of available sensors. This is formulated as a minimum set cover problem, and an optimal solution is obtained using a mixed-integer linear programming solver. We focus on maximizing transient localization performance with a limited number of sensors. A genetic algorithm is applied to determine sensor locations, and the solutions obtained by this method provide significantly better localization performance than other approaches. We show differences in sensor placements for detection and localization: sensors are more evenly distributed throughout the network for detection purposes, while for localization, they are more concentrated in areas with longer pipes and simpler network structures. Finally, we present an analysis of two pressure monitoring datasets collected from a real-world water distribution system (SLG network). The first dataset consists of data from 28 sensors with a 100 Hz sampling frequency, collected over 7 to 30 days. We propose a method to identify and analyze noise levels and distributions at each sensor. Using a modified CUSUM algorithm, we detect transients and correlate them across sensors to identify events detected by multiple sensors.
A transient-magnitude-based clustering method is then employed to group events based on their magnitudes, followed by a localization approach that utilizes the arrival time differences of transients between sensors. The findings indicate that noise levels in real-world monitoring data vary both spatially and temporally and are not independently normally distributed. Additionally, the arrival times detected by the modified CUSUM algorithm may not always accurately reflect the true transient arrival times due to mismatches between the signal characteristics and tuning of model parameters. Accurate identification of transient arrival time is particularly challenging for slowly changing pressure wave fronts. The second dataset includes pressure monitoring data from 7 sensors, during which 14 active transients with known source locations, times, and magnitudes were generated. We apply the modified CUSUM algorithm to detect transients at the sensors and correlate detection success with spatial metrics. The analysis confirms that the effective NJFP has the highest correlation with detection success, consistent with the simulation results. Additionally, the transient magnitude ratios between sensors and the source are found to be similar to the ratios calculated based on theoretical transmission coefficients when the source and sensor are in close proximity, suggesting that transmission coefficients can be used to estimate transient magnitudes in real networks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158880</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Observations of Surfzone Vorticity Using Optical Remote Sensing</title>
<link>https://hdl.handle.net/1721.1/158879</link>
<description>Observations of Surfzone Vorticity Using Optical Remote Sensing
Dooley, Ciara Jaya
The surfzone is the dynamic interface between the land and ocean, where waves shoal and break as they reach shallow water near the shore. Currents and circulation patterns in the surfzone transport sediment, nutrients, pollutants, and other materials along and across the coast, and can create hazardous conditions for swimmers (rip currents). However, understanding of the strength and structure of eddies and vortices in the flow field primarily remains limited to numerical models and theory. Here, novel observations of surfzone vorticity at small [O(10m)] and large [O(100m)] spatial scales are presented and related to incident wave conditions and the measured underlying bathymetry. Field experiments were conducted at a sandy beach on the Atlantic Ocean, and nearshore flows were observed using optical remote sensing (coastal imaging) and in situ sensors. Remote sensing algorithms are expanded from previous applications to estimate high spatial resolution two-dimensional surface flows by tracking the motion of naturally occurring foam throughout the surfzone. Estimated currents are correlated with in situ flow measurements, and errors increase as the sea-surface viewing angle becomes more oblique and image quality decreases. Large spatial-scale vorticity estimated using remotely sensed flows increases with alongshore bathymetric inhomogeneity, and complex circulation patterns corresponding to holes and channels in the seafloor persist for days at a time. Small spatial-scale vorticity estimated from a 5-m diameter ring of 14 current meters increases with the directional spread of the incident wave field, consistent with increased vorticity injection from the crest-ends of breaking waves. Small spatial-scale vorticity estimated using remotely sensed flows is spatially variable and correlated with the amount of wave breaking observed at a given location. 
Enhanced vorticity at large and small spatial scales occurs in the inner surfzone, and virtual drifters released into the remotely sensed flow fields demonstrate cross-shore variability in dispersion and mixing. This thesis expands the understanding of vorticity dynamics in the surfzone through unique field observations and provides new tools for coastal research and monitoring through development of remote sensing techniques.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158879</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering Place: Unlocking Value for Investors by Integrating Indigenous Values in Luxury Hospitality</title>
<link>https://hdl.handle.net/1721.1/158878</link>
<description>Empowering Place: Unlocking Value for Investors by Integrating Indigenous Values in Luxury Hospitality
Peragallo, Nadra Alia
The luxury hospitality industry has long been attuned to shifting consumer preferences, particularly as travelers increasingly seek unique, meaningful experiences. In today’s global market, trends centered on personalization, wellness, authenticity, and regeneration—further accelerated in the post-pandemic travel era—present both challenges and opportunities for real estate investors. This shift raises a critical question: How and where can value be unlocked in this evolving landscape?&#13;
&#13;
This thesis explores how real estate investors can maximize value creation in the luxury hospitality sector by leveraging traditional performance metrics alongside a complementary framework designed to uncover underexplored opportunities and enhance collaboration among stakeholder groups. Through the analysis of two case studies—Salterra Resort &amp; Spa in South Caicos, Turks &amp; Caicos Islands, British West Indies, and Puntacana Resort and Club in the Dominican Republic—the study demonstrates the practical application of this framework in tropical, coastal, and island regions, where the interaction between tourism, local communities, and fragile ecosystems is particularly pronounced. By showcasing its success, this research provides adaptable stakeholder rubrics and qualitative system dynamics causal loop diagrams as templates, while broadening the scope for innovation and inspiring further exploration of sustainable, value-driven approaches in luxury hospitality.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158878</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two's More Fun than One: How the Presence of Multiple Nutrients Changes Microbial Competition and Foraging in Unexpected Ways</title>
<link>https://hdl.handle.net/1721.1/158877</link>
<description>Two's More Fun than One: How the Presence of Multiple Nutrients Changes Microbial Competition and Foraging in Unexpected Ways
Bloxham, Blox Willow
Microbes exist in incredibly diverse environments with many possible resources (i.e. nutrients) to compete and forage for. To make this complex system tractable, ecologists often study microbes in the presence of a single resource in order to predict and explain what happens with multiple resources. But what gets lost when we do this? Are there phenomena that only emerge in the presence of multiple resources? Here, I explore the ecological implications of three phenomena that each require the presence of at least two resources. First, I show that the diauxic lags that occur when a microbe needs to switch between resources after one is depleted can allow ‘fast-switcher’ microbes to coexist with competitors that exclude them in single-resource environments. Then, I derive a rich temporal niche structure that arises from variations in the order in which resources are depleted in ecosystems with a pulsed resource supply and show that these temporal niches reshape community structure, vastly increasing the expected diversity of microbial ecosystems. Finally, I present a novel differential strategy in which a microbe attempting to intercept a moving source of multiple resources can treat one resource as an attractant and the other as a repellent to significantly increase its chances of successfully intercepting the source as compared to just being attracted to the resources released by the source. Each of these phenomena fundamentally requires the presence of at least two resources and reshapes microbial behavior and ecology. Thus, they collectively highlight the need to carefully consider how characterizations from single-resource environments actually combine to determine what happens in multi-resource environments and what new dynamics must be accounted for in such a bottom-up approach.
I conclude with an argument that the case of two resources may be particularly relevant to study due to how much complexity can emerge at just the first step up from one resource to two.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158877</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning-Based Complex Terrain Navigation Under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/158876</link>
<description>Learning-Based Complex Terrain Navigation Under Uncertainty
Cai, Xiaoyi
In complex off-road environments, accurately identifying traversable terrain is crucial for achieving fast and reliable navigation. Existing methods learn terrain properties directly from data via self-supervision to automatically penalize trajectories moving through undesirable terrain. However, challenges remain in properly quantifying and mitigating risk due to uncertainty in learned models and improving model generalization in novel environments. To address these challenges, this thesis presents a unified framework to learn uncertainty-aware, physics-informed traversability models and achieve risk-aware navigation in both in-distribution and out-of-distribution terrain. First, the proposed method efficiently quantifies both aleatoric and epistemic uncertainty by learning discrete traversability distributions and probability densities of the traversability predictor’s latent features. Leveraging evidential deep learning, this work parameterizes Dirichlet distributions with network outputs and proposes a novel uncertainty-aware squared Earth Mover’s distance loss with a closed-form expression that enhances learning accuracy and navigation performance. Second, the proposed method achieves risk-aware navigation by simulating state trajectories with the worst-case expected traversability values to handle aleatoric uncertainty and by penalizing trajectories moving through novel terrain with high epistemic uncertainty. Third, the proposed method improves model generalization by embedding physics priors directly into the mathematical formulation of evidential neural networks and implicitly aligning learned models with physics models through a physics-informed training loss. Finally, through extensive simulation and real-world experiments on wheeled and quadruped robots, it is demonstrated that this work leads to faster navigation with higher success rates when compared to existing risk-aware approaches, even in environments with significant distribution shifts.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158876</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>CO₂ Capture with Lithium Oxide in Molten Salt Media : A Case Study of CO₂ Capture via Electrochemically Produced Metal Oxide</title>
<link>https://hdl.handle.net/1721.1/158875</link>
<description>CO₂ Capture with Lithium Oxide in Molten Salt Media : A Case Study of CO₂ Capture via Electrochemically Produced Metal Oxide
Byun, Gi Hyun
As the unprecedented temperature rise originating from anthropogenic carbon dioxide (CO₂) emission intensifies, the development of post-combustion carbon capture technologies has been urged. Despite their maturity, conventional thermal swing processes using aqueous amines suffer from significant limitations, including high energy requirements and sorbent degradation. Electrochemical CO₂ capture technologies, which use electrical energy instead of thermal energy, have emerged as an energy-efficient way to capture CO₂. This shift not only improves energy efficiency but also reduces reliance on fossil fuels, further contributing to reduction in CO₂ emissions. This work explored the potential of electrochemical metal oxide formation for CO₂ capture, a promising alternative to amine-based systems due to its exceptional sorbent (i.e., metal oxide) stability. Li₂O in a eutectic mixture of potassium nitrate (KNO₃) and lithium nitrate (LiNO₃) was chosen as a case study due to the relatively well-understood chemistry of the system and the potential synergistic effects between the metal oxide and the molten salt. Primarily, we investigated the synergistic effect of Li₂O in nitrate molten salt via thermal gravimetric analysis. Next, Li₂O electrochemically produced by reduction of oxygen gas was tested as a CO₂ sorbent while investigating parameters affecting its conversion to lithium carbonate (Li₂CO₃). Through this study, we suggested a dissolution model as a crucial pathway for conversion. Lastly, we explored the effect of adding nitrite ion (NO₂⁻) to the molten salt. An irreversible side reaction between NO₂⁻ and CO₂ was confirmed with X-ray diffraction and NOₓ measurement. This thesis demonstrates the feasibility of electrochemical metal oxide-based CO₂ capture, highlighting some considerations in the capture step.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158875</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Methods to Improve Satellite Attitude Determination and Control with a Focus on Autonomy, Generalizability, and Underactuation</title>
<link>https://hdl.handle.net/1721.1/158874</link>
<description>Computational Methods to Improve Satellite Attitude Determination and Control with a Focus on Autonomy, Generalizability, and Underactuation
McKeen, Patrick
The attitude determination and control system (ADCS) onboard a satellite uses sensors to measure orientation and angular velocity, enabling the satellite to manage angular momentum, counteract disturbances, and point in the desired directions. Many historical ADCS approaches are designed for constant pointing goals, high-accuracy sensors, powerful actuators, or larger, high-inertia satellites. Many modern satellites are small satellites (tens of kilograms or less), with lower-cost actuators and sensors, and may have more complicated attitude goals. This dissertation presents a variety of computational approaches to improve ADCS performance by leveraging detailed satellite dynamics modeling and estimation, disturbance inclusion, and trajectory planning, all optimized for efficient onboard computation suitable for small satellites. The proposed framework generalizes ADCS operations, allowing it to adapt automatically to different satellite types, mission requirements, and operational goals, reducing reliance on predefined ground-based commands. This framework can be used in place of standard control laws to make ADCS more autonomous and “hands-off,” calculating its own slews and desaturation while meeting pointing goals, even in cases of underactuation or large disturbances. This generalized and autonomous framework is a contribution of this work, alongside each of its components, which can be individually used in their own right. One key component of this work is a generalized state estimator that integrates a dynamic model of the spacecraft. This estimator demonstrates high accuracy across various satellite configurations, achieving angular error as low as 0.01° in low Earth orbit (LEO) with high-quality sensors (but no star trackers), compared to the typical 1° error of conventional methods.
The estimator can account for biases, sensor errors, and external disturbances, ensuring robust performance (e.g., 0.1° error in LEO) even with lower-quality sensors (MEMS gyroscopes, plus magnetometers and sun sensors). This adaptability highlights the increased autonomy of the system, as it requires minimal human intervention to maintain high accuracy across diverse mission scenarios. Another major contribution is the integration of disturbance modeling into control laws. By accounting for disturbances directly (either individually or as an all-in-one value tracked by the estimator), rather than through reactive measures like integral control, the proposed methods improve stability and performance, particularly for underactuated systems, improving pointing accuracy by up to 20 degrees. The developed control laws are adaptable to various actuator configurations, disturbance environments, and pointing objectives. This flexibility extends to modifying pointing goals, such as aligning specific vectors rather than requiring a fully specified orientation, enhancing mission adaptability. This work also implements a novel trajectory planning method that generates efficient pointing trajectories for both constant and time-varying goals. The method, based on the Augmented Lagrangian iterated-LQR (ALTRO) approach, creates sequential mission trajectories that optimize performance even under underactuation or disturbance conditions. The planned trajectories are followed by two types of robust closed-loop controllers, applicable across satellite architectures ranging from large weather satellites to 3U CubeSats. By enabling onboard trajectory planning and adaptive control adjustments, this method significantly reduces the need for ground-based planning and interventions, further advancing autonomous operation. The combined framework of estimation, disturbance-aware control, and trajectory planning achieves significantly higher accuracy than traditional ADCS approaches.
This enables the use of commercial off-the-shelf components in high-performance missions, overcoming the limitations of low-cost sensors and actuators. The proposed methods allow satellites to operate with weaker or fewer actuators, such as magnetic-only control, while still achieving precise pointing, thereby expanding the feasibility of more autonomous, robust, and cost-effective satellite operations.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158874</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relevance for Human-Robot Collaboration: Definitions, Systems, Algorithms, and Applications</title>
<link>https://hdl.handle.net/1721.1/158873</link>
<description>Relevance for Human-Robot Collaboration: Definitions, Systems, Algorithms, and Applications
Zhang, Xiaotong
Human-Robot Collaboration (HRC) combines the strengths of human and robotic capabilities to accomplish complex tasks, yielding significant impacts in various domains. To enable seamless interaction in dynamic and unpredictable environments, robots are required to efficiently and accurately perceive their surroundings, align reasoning with human cognition, anticipate key attributes, and generate safe, effective actions to support humans proactively. This thesis introduces relevance, a novel concept inspired by human cognition, to improve the efficiency, safety, and intelligence of HRC. Relevance enables robots to prioritize objects based on their importance to human goals, allowing them to concentrate computational resources on key elements. This focused approach reduces input space for essential algorithms, minimizes processing delays, and enhances safety and adaptability in dynamic environments, facilitating more natural and intuitive collaboration with humans. This thesis systematically explores the concept of relevance, introducing a hierarchical model for relevance quantification that combines scene understanding in cluttered environments with an event-based, multi-modality framework, enabling real-time relevance determination based on human objectives, preferences, spatial-temporal relationships, and constraints. A relevance-based perception strategy further directs models to prioritize key areas, reducing computational and inference times, while two new safety metrics—Critical Collision Probability (CCP) and Average Collision Probability (ACP)—quantify reduced collision risks in Human-Robot Collaboration (HRC). Additionally, a relevance-driven framework integrates relevance quantification with dynamic scene understanding and decision-making, achieving high human objective and relevance prediction accuracy. 
An advanced human intention prediction framework using head orientation, object affordance, and hand movement also enhances precision, accuracy, and F1 scores over baseline models. Results demonstrate that relevance quantification significantly reduces task planning time by 79.56% and inquiries by 80.84%, with a real-world coffee-serving demonstration highlighting its potential for proactive, autonomous assistance. Furthermore, the safe motion generation algorithm reduces collision incidents by 63.76% and collision frames by 44.74%, supporting accurate, safe robotic assistance in dynamic environments. The concept of relevance enhances the efficiency, safety, and intelligence of human-robot collaboration (HRC) within dynamic and unpredictable environments, supporting a deeper integration of robotics into diverse real-world applications. Its potential extends beyond HRC, with promising applicability in autonomous driving and other complex domains where adaptive, context-aware decision-making is essential.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158873</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of terrestrial organic carbon export and preservation in the marine environment</title>
<link>https://hdl.handle.net/1721.1/158872</link>
<description>Mechanisms of terrestrial organic carbon export and preservation in the marine environment
Boehman, Brenna L.
Export of terrestrial carbon from land to sea is a globally important carbon flux that is poorly constrained and has implications for atmospheric carbon levels over modern and geologic timescales. Many factors control the fate of exported carbon and the subsequent impact on carbon budgets, including the timescales of export, the composition of organic matter, and degradation processes. This thesis uses biomarkers, bulk geochemical tools, and incubation studies to interrogate the factors controlling terrestrial carbon export and preservation in the marine environment. The thesis focuses on two globally important river systems that collectively deliver 25% of the total terrestrial carbon flux to the ocean, the Ganges-Brahmaputra (G-B) Rivers and the Amazon River. The first two chapters focus on the G-B Rivers. Utilizing compound-specific biomarker analysis within a high-sedimentation-rate (30 cm/yr) terrestrial archive in the Bay of Bengal, we interrogate (i) timescales of organic carbon export from land to sea, and (ii) basin-scale geochemical responses to rice agriculture expansion. These analyses utilize the radiocarbon ages and stable carbon-13 isotopic composition of lipids produced by Archaea and Bacteria. We identify that ca. 75% of these biomarkers experience millennial-scale storage in the G-B basin, in agreement with previously assessed plant-derived compounds, highlighting that an overarching soil stabilization mechanism controls the age of exported terrestrial organic matter. Individual biomarkers and bulk geochemical analysis chronicle the change in methane-derived soil carbon within the basin due to rice paddy expansion, highlighting that 49% of Bangladesh’s methane emissions from 1990-2008 have been abated by soil storage.
The last two chapters focus on the Amazon River to examine the fate of terrestrial organic carbon in the marine environment, (iii) utilizing geochemical analysis of historical sediments and sediments from a field campaign in 2023, and (iv) utilizing terrestrial and marine endmembers in incubation experiments simulating the dynamic coastal environment. Sediment geochemical and biomarker analyses highlight the preservation of an isotopically distinct terrestrial endmember in the coastal sediments, which has led to at least 50% underestimation of the burial efficiency. Quantitative stable isotope probing incubations using ¹³C-lignin indicate the dual role of microbially mediated degradation and photo-degradation, highlight that the microbial communities primarily responsible for lignin degradation in the marine environment are of terrestrial origin, and identify a new ecological role for Bathyarchaeota. This thesis integrates diverse biogeochemical techniques across the terrestrial-marine interface to examine important open questions in globally important carbon budgets, merging isotope geochemistry, microbiology, and earth science. The findings contribute to our understanding of the modern carbon cycle and the impact of anthropogenic perturbations of the last decades and into the future.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158872</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reproduction, settlement, and phenology of intertidal barnacles: Implications for larval dispersal</title>
<link>https://hdl.handle.net/1721.1/158871</link>
<description>Reproduction, settlement, and phenology of intertidal barnacles: Implications for larval dispersal
Weinstock, Jane B.
Knowledge of the consequences of ocean warming on marine populations and communities is urgent. Warming oceans are predicted to result in changes to the seasonal timing of reproduction and settlement (phenology); faster development rates and, for crustaceans, smaller larvae; reduced larval dispersal distances; and reduced connectivity between coastal populations. However, these predictions are largely based on laboratory and modelling studies, with little observational research to explore how these interactions unfold in natural ecosystems where temperature variability is pervasive. In this thesis, I investigate the links between reproduction and settlement timing of intertidal barnacles, and I explore the extent to which the timing of these events is explained by environmental and astronomical cycles and by water column conditions. In Chapter 2, I assess the cycles driving Chthamalus fissus reproduction and settlement in Southern California, and I offer a first-order estimate of alongshore larval transport. I found that barnacles were reproductively active almost year-round, with clear lunar cyclicality and modest seasonality. Conversely, settlement exhibited little cyclicality on any timescale. Chapters 3, 4, and 5 focus on the effects of temperature on Semibalanus balanoides early life history along a steep temperature gradient in the northwest Atlantic over twenty years of warming. In Chapter 3, I investigate the effects of intertidal temperature on reproduction timing, analyzing separately the processes of fertilization, embryonic brooding, and larval release. In Chapter 4, I estimate larval duration in natural populations, and I measure the impact of temperature on larval duration in the laboratory and field. In Chapter 5, I investigate the effects of water temperature on larval size at settlement.
I found that warmer nearshore temperatures significantly correlated with shorter brooding times of developing embryos, shorter field-estimated larval duration, and smaller larval settlers. Notably, the interplay between benthic reproduction, pelagic development, and temperature variability across space and time created counter-intuitive patterns in larval duration, size, and likely dispersal. Together, these findings point to the importance of reproductive timing in determining dispersal and population connectivity, and they highlight the need for extensive field measurements to quantify phenology and phenology shifts in benthic systems.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158871</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radium and Mercury Dynamics in the Arctic: Investigating Terrestrial Inputs, Groundwater Discharge, and Chemical Cycling in a Changing Climate</title>
<link>https://hdl.handle.net/1721.1/158870</link>
<description>Radium and Mercury Dynamics in the Arctic: Investigating Terrestrial Inputs, Groundwater Discharge, and Chemical Cycling in a Changing Climate
Bullock, Emma Jacqueline
The Arctic Ocean is distinctive due to its extreme seasonal variations in temperature and significant terrestrial inputs, including freshwater, carbon, nutrients, and toxins. Of particular concern is mercury (Hg) in its neurotoxic form, methylmercury (MeHg), which is already beginning to adversely affect Arctic human populations and wildlife. However, the region’s harsh conditions and remoteness have made conducting seasonal chemical and hydrological studies challenging. Tracers of boundary inputs, such as the radium (Ra) isotope quartet, offer potential for tracking and quantifying riverine and submarine groundwater discharge (SGD) of species like Hg into the Arctic Ocean. This thesis employs seasonal data and laboratory experiments to investigate the factors influencing terrestrial Ra inputs to the Arctic Ocean, quantifies SGD and associated Hg inputs to an Arctic coastal lagoon, and elucidates the chemical and geological factors influencing Hg cycling in Arctic groundwater.&#13;
&#13;
Using historical and unpublished datasets combined with new laboratory investigations, differences in inputs of riverine Ra isotopes between the North American and Eurasian land masses were identified. The findings revealed higher Ra fluxes from the North American continent, attributed to greater sediment loads and lower organic matter in rivers compared to those on the Eurasian land mass. Subsequently, Ra data from five extensive field campaigns to Simpson Lagoon, Alaska, provided insights into Ra cycling on a more localized scale. These campaigns offered the first seasonal perspective on supra-permafrost SGD along an Arctic coastline, suggesting that SGD fluxes may rival those of rivers along the Beaufort Sea coast. Concurrently collected Hg groundwater concentrations allowed for the development of the first estimates of Hg fluxes from groundwater to the Arctic Ocean. If these estimates hold true along the rest of the Pan-Arctic coastline, they could significantly alter our understanding of microbial MeHg uptake in the Arctic Ocean. Finally, sediment cores from Simpson Lagoon and two other locations along the Beaufort Sea coast were used to examine how changing groundwater conditions, such as changing salinity, temperature, and redox conditions, influence Hg cycling. These experiments, alongside findings from Simpson Lagoon groundwater, indicate that Hg cycling in recently thawed permafrost sediments involves a complex interplay between organic material, metal oxides, and sulfide species, with groundwater conditions and soil carbon content playing crucial roles in Hg mobilization.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158870</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards an Artificial Neuroscience: Analytics for Language Model Interpretability</title>
<link>https://hdl.handle.net/1721.1/158869</link>
<description>Towards an Artificial Neuroscience: Analytics for Language Model Interpretability
Gurnee, Robert Wesley
The growing deployment of neural language models demands greater understanding of their internal mechanisms. The goal of this thesis is to make progress on understanding the latent computations within large language models (LLMs) to lay the groundwork for monitoring, controlling, and aligning future powerful AI systems. We explore four areas using open-source language models: concept encoding across neurons, universality of learned features and components across model initializations, presence of spatial and temporal representations, and basic dynamical systems modeling.&#13;
&#13;
In Chapter 2, we adapt optimal sparse classification methods to neural network probing, allowing us to study how concepts are represented across multiple neurons. This sparse probing technique reveals both monosemantic neurons (dedicated to single concepts) and polysemantic neurons (representing multiple concepts in superposition) in full-scale LLMs, confirming predictions from toy models. In Chapter 3, we identify and exhaustively catalog universal neurons across different model initializations by computing pairwise correlations of neuron activations over large datasets. Our findings show that 1-5% of neurons are universal, often with clear interpretations, and we taxonomize them into distinct neuron families.&#13;
&#13;
To investigate spatial and temporal representations, we analyze LLM activations on carefully curated datasets of real-world entities in Chapter 4. We discover that models learn linear representations of space and time across multiple scales, which are robust to prompting variations and unified across different entity types. We identify individual "space neurons" and "time neurons" that reliably encode spatial and temporal coordinates. In Chapter 5, we use optimal sparse regression techniques to improve the sparse identification of nonlinear dynamics (SINDy) framework, demonstrating improved sample efficiency and support recovery in canonical differential systems. We then leverage this improvement to study the ability of LLMs to in-context learn dynamical systems and find internal representations which track the underlying system state.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158869</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>OPTASAT: An Open-Source, Flexible Software Framework for Small Satellite Operations</title>
<link>https://hdl.handle.net/1721.1/158868</link>
<description>OPTASAT: An Open-Source, Flexible Software Framework for Small Satellite Operations
Murphy III, Thomas Joseph
The unprecedented growth in access to space has created a corresponding growth in the number of spacecraft and the number of people operating spacecraft. This has meant that many of these operators are operating spacecraft for the first time. Gone are the days when the only operators of spacecraft were national governments, militaries, and massive corporations. The operators of small spacecraft today include many early-career individuals who need the tools to enable them to make strong decisions about the behavior of their spacecraft. The tools for operating spacecraft are often overlooked by teams focusing on the spacecraft themselves, but these operating tools are critical for mission success. Spacecraft operations tools have not developed in the same low-cost, widespread fashion as the spacecraft themselves. The best tools for modeling and understanding the situation of a satellite in space remain locked behind high barriers to entry, including high cost, long training, and complex interfaces. In the same way that satellites have gone from the size of automobiles to the size of toasters, the software for operating them needs to go from expensive, complicated, high-performing suites to simple, flexible, approachable options that are accessible to the democratized space operators. New spacecraft operations staff need straightforward, direct interfaces which give them the knowledge of where their spacecraft is, where it will be, and what it will be able to do, and they need to know when all the options at their disposal are viable. Operators also need to be given the capability to adjust their software in whatever ways are necessary to tailor it to the particular parameters of their missions, to reflect the incredible variety of spacecraft and missions that exist today. A gap exists in spaceflight software.
Users need software that can perform their mission planning tasks in the short term and inform them of the upcoming parameters of their spacecraft which concern them, whether this is the spacecraft’s location, solar illumination, orientation, or any other property which is relevant to their particular mission. This software must also allow the users to be aware of the expected output of their sensors, especially imaging sensors, such that they may have an understanding of what they are imaging and what it ought to look like. Finally, this software must be open-source, enabling the user to audit the software and make changes to the software to customize it to their preferences, which may differ from anything the original software developer could have imagined. Such spaceflight software does not yet exist. This dissertation develops and presents OPTASAT, the Open-source Python Tool for Awareness of Spacecraft and Analysis of Telemetry, which provides an extensible, modular interface for incorporation of multiple tools which contextualize spacecraft data in a manner which maximizes usefulness for the operators. A priority is visualization of data to facilitate rapid understanding and distillation of the complexity of a spaceflight operation. This software has been released as a fully-featured, open-source software toolkit which performs the mission analysis components deemed most crucial to those who stand to benefit from it. This software is intended to fulfill the needs of small spacecraft missions. Several particular application cases are studied, including an Earth Sensing mission, an Astronomy mission, and modeling communications for a real laser crosslink mission. These case studies are evaluated for their ability to present the relevant information to the operator. For Earth Sensing, this involves displaying information regarding the spacecraft’s location with respect to the Earth, and enabling the selection of ground targets for imaging.
For astronomy, the relevant information concerns the stars visible in the sky, and the spacecraft’s relationship to sources of interference like the Sun and Moon. For the laser crosslink example, we study the operator’s understanding of the spacecraft as they pass over a ground station and determine the operational configurations available for this communication. OPTASAT fills gaps in the field, presenting users with a tool which is flexible and intuitive to use for understanding data from spacecraft in a way that is not currently available in the offerings on the market. Additionally, it takes functionality that is currently available in proprietary paid software and makes it available for free, in an open-source offering that is accessible to everyone. OPTASAT will allow spacecraft operators (especially those operating spacecraft for the first time) to confidently know the state of their spacecraft, enabling them to make the best decisions for their satellites. This will reduce barriers to entry and smooth the learning curve, reducing the amount of overhead for new spacecraft operators. OPTASAT will be yet another step in the ongoing process of making space more accessible to a larger pool of users.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158868</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Hidden Roots of Neoliberal Success in Agrarian Transformation: State Engagement, Farmer Professionalization, and Technological Interdependence in the Senegal River Valley</title>
<link>https://hdl.handle.net/1721.1/158867</link>
<description>The Hidden Roots of Neoliberal Success in Agrarian Transformation: State Engagement, Farmer Professionalization, and Technological Interdependence in the Senegal River Valley
Spielberg, Brian Jonars Besana
Recent scholarship celebrates irrigated rice in the Senegal River Valley (SRV) as a success story. This is remarkable considering the SRV’s history of agrarian transformation, which critics characterize as incoherent, erratic, and self-destructive. How did this turnaround happen? How did good seeds emerge from bad soil? Conventional explanations point to enlightened market-based reforms and technological upgrading following state withdrawal from most agricultural activities. In other words, the SRV is portrayed as a triumph of neoliberalization. This dissertation offers an alternative, additive view. In Paper 1, I situate the SRV’s transformation in broad historical context, showing how notions of development, technological change, and poverty alleviation have evolved and the implications for what strategies are pursued. I illustrate how a popular contemporary development model—appropriate technology (AT) 2.0, an evolution of Schumacher’s 1970s AT 1.0—that valorizes small-scale technologies and market-led interventions is attractive in explaining successes like the SRV, even as it proves ultimately reductive. In Paper 2, I demonstrate how the state, despite policies curtailing its activities and a dominant narrative asserting its disengagement, continues to play an active role in the SRV. By imparting practical skills, such as pump operation, contract negotiation, and bookkeeping, state action helped farmers professionalize. A durable effect is a “we’re in this together” state-farmer mentality. When this relationship is tested, well-respected intermediaries, often religious leaders, intercede. In Paper 3, I show how farmers construct assemblages of resources, skills, and knowledge to achieve their goals. They rely on negotiation skills and social ties with local leaders, appealing to “public interest” couched in religious terms.
In forsaking key aspects of the dominant assemblage to pursue alternatives, farmers exercise their agency and enhance market functioning by permitting flexibility, acknowledging technological interdependencies, and mitigating recurrent risks. This dissertation offers hope that successful agrarian development is possible in challenging, resource-constrained environments. Based on 11 months of fieldwork, I show how state and farmer actions bolstered market reforms, underpinning their success. In centering on-the-ground realities, I move beyond dominant explanations and neat theoretical classifications to reveal underreported but nonetheless fundamental processes and mechanisms through which development occurs.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158867</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cost Optimized Logistics for Commercial Operations in Low Earth Orbit and Cislunar Space</title>
<link>https://hdl.handle.net/1721.1/158866</link>
<description>Cost Optimized Logistics for Commercial Operations in Low Earth Orbit and Cislunar Space
Brown, Ireland
Designing profitable mission and logistics architectures is necessary to establish a profitable commercial market and support a robust space economy. It is the goal of the National Aeronautics and Space Administration (NASA) to establish such an economy in low Earth orbit (LEO) through the implementation of commercial LEO destinations and to commission self-sustaining lunar infrastructure through the Artemis missions. The ISS and the Apollo lunar landers demonstrated the ability to provide safe and reliable habitation, but the cost to support these missions has been on the order of billions of United States Dollars (USD). Minimizing the operational costs of commercial space systems will be required if commercial companies expect to generate a profit from their services. To address this, this thesis derives and demonstrates a manual cost optimization method for space system mission architectures, with respect to logistical and system design. In tandem, a computational tool called the Cost model for Space system Operations (COST-O) was developed. The demonstration included the iteration of a logistics and system design vector for two cases: a commercial LEO space station, and a commercial lunar in-situ resource utilization (ISRU) liquid oxygen generation system. These mission architectures were modelled and simulated in SpaceNet, which first analyzed them for feasibility; they were then processed by COST-O. These data were used to make financial forecasts and analyzed for cost sensitivity. The results suggest that for a commercial LEO space station, a closed loop ECLSS, large stockpile of resources, reduced resupply cadence, and a combination of tourists and visiting crew would be a profitable architecture at a crew capacity of at least three paying customers present on the station per day with an annual operational cost of 1,129,731,710 USD.
Profits would be achieved by the end of ten years of steady state operations at the current market price of 3.12 million USD per crew member per day. Attempts to minimize this cost should first target the cadence of funded astronaut technician flights, as crew launches contribute most to the overall operational cost. Future work should address ways to minimize this, such as reducing the number of astronaut technicians that must be present at any given time. For a commercial lunar ISRU liquid oxygen generation system, an architecture supporting a closed loop system, using Starship as the launch and landing vehicle, a prepositioned stockpile of resources at the lunar surface, and a hydrogen reduction agent is most cost optimal, with an annual operating cost of 19,275,486,559 USD, and profitability achieved at the design rate of twenty metric tons of liquid oxygen produced and sold per year. At the current market price of 1.2 million USD per kilogram, the system would be profitable by the end of the first year of steady state operations. Attempts to minimize this operational cost further should focus on improving the recyclability of the system. Future work should evaluate adding robustness to the architecture by delivering multiple systems and should model deliberate cargo packing decisions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158866</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>For and Beyond the Plaques: Sustainable Certification Adoption&#13;
 and Its Impact on Real Estate Decision-Making in the Boston-Cambridge Market</title>
<link>https://hdl.handle.net/1721.1/158865</link>
<description>For and Beyond the Plaques: Sustainable Certification Adoption&#13;
 and Its Impact on Real Estate Decision-Making in the Boston-Cambridge Market
Huang, Shenglin
As demand for green and healthy buildings grows, real estate developers face complex decisions regarding building certification adoptions, which have become influential in real estate market dynamics. This thesis investigates how developers in the competitive Boston-Cambridge area navigate the sophisticated certification landscape—focusing on LEED, ENERGY STAR, WELL, Fitwel, and WiredScore/SmartScore—to gain competitive advantages, attract and retain tenants, maximize financial performance, and align with regulatory requirements and ESG goals.&#13;
Using a mixed-methods approach, including quantitative analysis of certification overlaps and trends, along with qualitative insights from industry interviews, the study provides a comprehensive understanding of how real estate developers strategically use certifications to influence asset value while meeting tenant and investor expectations. Findings offer potentially actionable insights into how certifications shape market positioning and inform the decision-making process in real estate development.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158865</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Singlet exciton fission-enhanced silicon photovoltaics: Interfacial engineering, device design and spectroscopic technique development</title>
<link>https://hdl.handle.net/1721.1/158864</link>
<description>Singlet exciton fission-enhanced silicon photovoltaics: Interfacial engineering, device design and spectroscopic technique development
Nagaya, Narumi
The growing global energy demand combined with resource and space limitations necessitates enhancements in crystalline silicon solar cells, which are the current dominant solar technology. However, their efficiencies have only increased incrementally over the past 20 years, as they are starting to approach the theoretical efficiency limit. The main source of loss is thermalization, where energy in excess of the bandgap absorbed by silicon is lost as heat. Singlet exciton fission in organic molecules has been proposed to reduce these losses. By having the organic layer absorb the high-energy light and transfer the triplet excitons generated from the singlet fission process to silicon, the photocurrent in this spectral region can be doubled, with the potential of raising the efficiency from the traditional limit of 29.4% to up to 42%.&#13;
&#13;
The greatest challenge with these devices has been to demonstrate an increase in the silicon photocurrent, a necessary condition to show that the technology is viable. Scientifically, there are three main components to this problem. The first is to successfully couple the triplet excitons to silicon. The second is that not much is understood regarding the exciton and charge carrier dynamics at this interface. Finally, the silicon solar cell architecture should also be considered to extract transferred carriers effectively.&#13;
&#13;
This thesis tackles these three parts through an interfacial materials, device architecture, and spectroscopy approach. Using tetracene as the singlet fission layer and n-doped silicon, we show that defect-induced states in a thin interlayer of hafnium oxynitride that lie near the band edge of silicon are beneficial for triplet exciton transfer. We also identify that triplet-induced electric field-effect passivation is beneficial for the triplet sensitization process of silicon, and design a new bilayer interface consisting of a zinc phthalocyanine donor layer that introduces preferential near-silicon band edge states, and an ultrathin oxide chemical passivation layer. We then study various device architectures, confirming the importance of using a device designed to extract surface charge carriers efficiently, demonstrating the first enhancements in single-junction silicon solar cell external quantum efficiencies and photocurrent from singlet fission. Finally, we build and use advanced spectroscopy techniques and numerical frameworks to study exciton and charge carrier dynamics in singlet fission-sensitized solar cell materials, confirming that the triplet excitons are contributing to all the positive effects observed in the devices.&#13;
&#13;
These results have shown that singlet fission-sensitized silicon solar cells are a viable technology for enhancing silicon solar cell efficiencies beyond the conventional single-junction limit. This interface remains a rich area for fundamental scientific studies, involving coupling between molecular dark states to bulk silicon. We hope that the key findings can help direct research efforts towards scalable implementation of this technology, and stress that the fundamental understanding of the interface also has broad implications to other silicon technologies that can benefit from enhanced quantum yields, including photodetectors.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158864</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic Advances in Range-Aided Navigation</title>
<link>https://hdl.handle.net/1721.1/158863</link>
<description>Algorithmic Advances in Range-Aided Navigation
Papalia, Alan A.
This thesis contributes to the advancement of range-aided simultaneous localization and mapping (RA-SLAM) through algorithmic developments and real-world demonstrations. Broadly speaking, SLAM is the process by which an agent combines sensor measurements to simultaneously create a map of the world and localize itself within this map. SLAM has been called the ‘holy grail’ of field robotics, and in many instances it is a critical enabling capability for autonomous agents to operate in the real world. RA-SLAM is the specific case of SLAM which incorporates point-to-point distance measurements (e.g., distance measurements between an autonomous underwater vehicle and an acoustic buoy) into the inference process. The ability to leverage such measurements is desirable, as they can help in resolving ambiguities (e.g., am I in hallway A or B?) and the relevant sensors are often low-cost and simple to integrate (and thus pose the potential to be widely deployed). However, there are theoretical challenges that have historically limited the reliability of RA-SLAM approaches. At the root of these challenges is the issue that a single range measurement does not uniquely determine the relative position between two points. In state-of-the-art RA-SLAM formulations, this ambiguity manifests as non-convexity in the maximum a posteriori inference problem. As a result of this non-convexity, standard local-search optimizers are highly dependent on quality initializations to obtain the correct state estimate. To address this issue of reliability, this thesis presents the first certifiably correct algorithm for RA-SLAM. This algorithm, Certifiably Correct RA-SLAM (CORA), is capable of (i) obtaining globally optimal solutions for many real-world RA-SLAM problem instances and (ii) providing certificates of correctness for these solutions.
CORA leverages a novel semidefinite programming (SDP) relaxation of the RA-SLAM problem, which it solves efficiently using the Riemannian Staircase methodology. This methodology allows CORA to typically obtain globally optimal solutions faster than the existing state-of-the-art local solvers. These results expand our understanding of problems suited for efficient global solvers and highlight the key problem structures that appear necessary to develop and deploy such solvers, pointing towards exciting future directions in trustworthy model-based autonomy. We demonstrated the performance of CORA on a range of real-world RA-SLAM datasets, including a set of large-scale multi-agent experiments conducted as part of this work. In these experiments CORA reliably estimates agents’ trajectories in both single- and multi-robot settings. CORA gracefully scales to large problems consisting of multiple agents and tens of thousands of robot poses. These experiments not only validate CORA’s performance, but also fill an existing gap in open-source datasets available to the research community and provide practical insights to guide future deployments of autonomous navigation systems in large, complex environments.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158863</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using AI to Improve Price Transparency in Real Estate Valuation</title>
<link>https://hdl.handle.net/1721.1/158862</link>
<description>Using AI to Improve Price Transparency in Real Estate Valuation
Xu, Cunjia
This thesis explores the integration of artificial intelligence (AI) into real estate valuation, focusing on visual property attributes to enhance traditional Hedonic models. By incorporating Vision Language Models (VLMs) and generative AI, the research evaluates the potential of these technologies to assess non-standard variables like aesthetic appeal, condition, and cohesiveness of interior and exterior property photos. The study contrasts traditional hedonic regression models, which rely on quantifiable factors such as square footage and location, with a new approach that includes AI-generated scores derived from property photos. The study employs three distinct models: the No_Rubric Model, the Composite Model, and the Verbose Model, with the Hedonic model serving as the baseline for evaluating their performance. The results demonstrate that incorporating visual data significantly improves model accuracy, aligning valuations more closely with buyer preferences and sold prices. This shift addresses the industry's need for price transparency and highlights how developers can design properties that better meet market demands.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158862</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Flood Risks to City Infrastructure Systems&#13;
Utilizing Scalable, Time Sensitive Modeling</title>
<link>https://hdl.handle.net/1721.1/158861</link>
<description>Predicting Flood Risks to City Infrastructure Systems&#13;
Utilizing Scalable, Time Sensitive Modeling
Boukin, Katerina
Flooding is emerging as the most expensive and frequent natural hazard around the world. Floods are highly dynamic in nature and cause physical damage to our built environment, loss of life, economic damage, and major impacts to society. For example, the at-ground road system, which comprises 30-60% of a city’s area in the US, is highly susceptible to flood damage while still needing to serve as evacuation routes for local residents. Similarly, the underground built system is extremely vulnerable to flood damage and poses a life risk to anyone within it. With urban landscapes constantly evolving, accurately predicting flood propagation and extent is imperative to mitigate these risks, especially as floods worsen due to climate change.&#13;
&#13;
Historically, the focus of flood risk assessment in industry and academia has been on the coastal urban environment, assessing the impact of fluvial flooding. This has resulted in many risk assessment tools that mostly cater to the effectively unlimited volume of flood water arriving from riverine or coastal fluvial flooding. For rain-driven impacts, common practice has simply switched the flood modeling to a pluvial orientation, keeping the rest of the risk-tool components identical across the different flood mechanisms. For pluvial flooding, existing urban flood modeling tools such as SWMM and PC-SWMM are limited by their catchment-based approach, neglecting surface runoff dynamics and spatial-temporal flood impacts. Consequently, these tools fail to capture the full extent of rain-driven floods, underestimating their severity and impact on urban environments.&#13;
&#13;
Addressing this gap requires sophisticated simulations that account for rain event characteristics and city morphology, yet such simulations are computationally demanding and require detailed urban data. Currently, flood impact analysis tools lack specificity for pluvial flood risks and do not address the risks to various city systems beyond building damage. As a result, the contribution of pluvial floods to overall flood risks is underestimated, compromising infrastructure resilience. As flood model results are a critical component of flood risk assessments, accurate spatial-temporal urban flood results will allow pluvial flood impact assessment to be simplified and flood damage to the different urban systems to be quantified.&#13;
&#13;
This research aims to develop a scalable and streamlined method to accurately quantify the risks of rain-driven floods to urban infrastructure systems. It addresses three key questions: (1) To what extent does current practice underestimate pluvial flood impacts? (2) What are the impacts of pluvial flooding on pavement systems when incorporating spatial-temporal modeling? (3) What is the significance of modeling pluvial floods using urban underground spaces? Using advanced flood modeling and numerical soil-water infiltration techniques, this research will quantify damages and lifecycle impacts to pavement and underground space systems. The method will provide information on the spatial and temporal distribution of flood damage and will enable scaling up single-element assessments to system-wide impacts. This holistic approach will improve urban flood risk management, supporting informed decision-making and the development of resilient infrastructure systems.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158861</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Examining the placenta’s role in neurodevelopment in the context of maternal obesity</title>
<link>https://hdl.handle.net/1721.1/158860</link>
<description>Examining the placenta’s role in neurodevelopment in the context of maternal obesity
Gunter-Rahman, Fatima M.
The placenta is a key organ determining fetal development and likely contributes to programming of long-term offspring health, in particular neurodevelopment. Various maternal exposures, such as psychosocial stress, diabetes, infection, and high body mass index (BMI) are associated with higher risks of impaired neurodevelopment in the offspring. One third of women in the United States are affected by maternal obesity (MO) during pregnancy, making it one of the most common exposures.&#13;
We profiled the term placental transcriptome in humans using single-nucleus RNA-seq, comparing expression profiles in MO versus lean conditions, in each of the two faces of the placenta separately. On both sides of the placenta across several cell types, MO was associated with upregulation of hypoxia response genes. On only the maternal-facing side, hypoxia gene expression was associated with offspring neurodevelopment outcomes measured at multiple time-points, in the Genetics of Glucose regulation in Gestation and Growth (Gen3G) cohort, an independent pre-birth cohort with bulk RNA-seq from placental tissue. We leveraged Gen3G to determine genes that correlated with impaired neurodevelopment and found these genes to be most highly expressed in extravillous trophoblasts (EVTs). EVTs further showed the strongest correlation between neurodevelopment impairment gene scores (NDIGSs) and the hypoxia gene score. We validated these findings in EVTs in an independent single-cell RNA-seq cohort from second trimester placenta, and found that cultured EVTs have increased NDIGSs in response to exposure to hypoxia. These data suggest that hypoxia in EVTs may be a key process in the neurodevelopmental programming of fetal exposure to MO. Our work opens up new directions of research, such as exploring applications of antioxidants to potentially mitigate some of the offspring consequences associated with MO.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158860</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microfluidic Platform for Vascularized Tissue Models</title>
<link>https://hdl.handle.net/1721.1/158859</link>
<description>Microfluidic Platform for Vascularized Tissue Models
Johnson, Matthew
This thesis presents a microfluidic platform designed to support 3D vascularized tissue models for microphysiological systems. The platform delivers pneumatic pressure and vacuum signals to drive fluid flow and pressure on tissue culture devices with integrated pumps and back-pressure regulators. The mechanical performance of the pumps and back-pressure regulators is characterized. Tissue compartments in each device contain endothelial and stromal cells suspended in a hydrogel during culture. An oxygenating reservoir stores and replenishes oxygen in circulating cell culture media. During assembly, screws are used to compress an elastomeric membrane, forming a seal and transmitting pneumatic pressure signals from the connection manifold to actuate the fluidic control elements. After a biological experiment the tissue culture devices can be disassembled, cleaned, and re-used, thus enabling cost-effective experimentation and prototyping. Each of the 4 layers of the tissue culture devices are made of thermoplastic polymers, and their design is translatable to injection molding for future production at scale. The design and manufacturing methods for the platform and individual device features are discussed. Two major biological experiments are presented to demonstrate the platform's ability to support emergent vascularization in the tissue culture device over 7 days. Microscope images show development of perfusable microvessel networks.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158859</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Engineered Skeletal Muscle Rings as Actuators Using Strain Sensing Methods</title>
<link>https://hdl.handle.net/1721.1/158858</link>
<description>Characterizing Engineered Skeletal Muscle Rings as Actuators Using Strain Sensing Methods
Rosado, Laura M.
A novel instrument was designed to characterize a force exertion model of engineered skeletal muscle rings. The instrument uses strain gauges to transduce a muscle ring contraction and has a verified resolution of 5 μN and 1.4 μm over the ranges of 5 μN and 1400 μm respectively. Experiments were carried out with four muscle ring specimens at six different structural stiffnesses. Each ring was excited at 1 Hz for 30 seconds while force and displacement were monitored. It was determined that muscle contractile distance and force are related by a negative power function.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158858</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strange Attitudes on Top</title>
<link>https://hdl.handle.net/1721.1/158857</link>
<description>Strange Attitudes on Top
Močnik, Maša
This dissertation investigates how attitude verbs of belief and desire engage with embedded material of a similar nature. Chapter 1 looks at the (cross-linguistically unusual) Slovenian existential doxastic attitude verb dopuščati (‘allow for the possibility’) and the embedding of epistemic modal verbs under it. Chapter 2 looks at the (overall puzzling) want and its Slovenian counterpart hoteti, and at their behaviour with respect to embedded doxastic attitudes, epistemic adverbs, and epistemic adjectives. Chapter 3 looks at the (cross-linguistically unusual) Koryak variable-force variable-flavour attitude verb ivək (‘think’, ‘allow for the possibility’, ‘say’, ‘suggest’) and at how its apparent bouletic flavour (‘wish’, ‘hope’, ‘fear’) is derived with the help of covert desiderative components inside the embedded clause. Attitude verbs play their standard role as quantifiers over possible worlds (Hintikka 1962); parameters of evaluation are assumed to contain a set of worlds called the information state (Yalcin 2007; a.o.), which the attitude verb modifies and passes to the embedded clause, while the epistemic modal base is taken to be ‘local’, forming a subset of the information state (Mandelkern 2017, 2019a). Some of the overarching theoretical contributions are the introduction of a new parameter of evaluation (‘selected state’), which is crucial in modelling embedding under non-universal attitude verbs, and a refined view of epistemic modality. Subjective epistemic modality is proposed to involve a second constraint on the shape of the modal base, whose effect is to strengthen embedded necessity claims and help derive the infelicities observed in chapters 1 and 2. We also address the connection between beliefs and desires in the context of various desire interpretations (wants in chapter 2, hopes and wishes in chapter 3).
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158857</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Use of System Theoretic Process Analysis (STPA) on Novel Tiltrotor Aircraft to Prevent Mode Confusion</title>
<link>https://hdl.handle.net/1721.1/158856</link>
<description>The Use of System Theoretic Process Analysis (STPA) on Novel Tiltrotor Aircraft to Prevent Mode Confusion
Basnight, Natalie Ann
Initiatives are underway to develop tiltrotor and vertical take-off and landing (VTOL) aircraft that enhance commercial and military aviation’s autonomy, capability, and survivability. These designs integrate rotary and fixed-wing elements, introducing distinct safety considerations. These safety concerns are largely due to the differing mental models of operators trained in either rotary or fixed-wing aviation, alongside the rising reliance on autonomy. The traditional hazard analysis techniques (e.g., Fault Tree Analysis and Failure Mode, Effects, and Criticality Analysis) do not adequately account for system component interactions or human factors in complex new aircraft designs. System Theoretic Process Analysis (STPA) is a powerful new hazard analysis technique for novel tiltrotor aircraft that includes their unique safety requirements. It is a top-down system hazard analysis technique that identifies loss scenarios (N. G. Leveson and J. Thomas, March 2018). It satisfies the tasks described in MIL-STD-882E (Department of Defense 2023). This research demonstrates the use of STPA to identify and mitigate potential instances of mode confusion between the operator’s mental model and the autonomy’s decision logic in the uniquely dynamic tilt-rotorcraft environment. Two previous tiltrotor aircraft accidents are analyzed utilizing Causal Analysis based on System Theory (CAST) to help set a framework for the importance of human and machine collaboration in systems. These accidents show a trend in the dangers of aircraft system mismanagement between various controllers. The CAST results for these accidents help provide information about how to prevent these types of incidents in the future, setting the stage for the use of STPA on novel tiltrotor aircraft, as demonstrated in this thesis. STPA can be used before design, implementation, and fielding, allowing for better early design of systems and reducing the cost of later redesign or modification.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158856</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simple Models for Complex Tropical Dynamics</title>
<link>https://hdl.handle.net/1721.1/158855</link>
<description>Simple Models for Complex Tropical Dynamics
Tuckman, P.J.
Studying Earth's tropics is an essential part of understanding the climate, simulating the Earth system, and predicting the societal impacts of weather. In this thesis, we use a hierarchy of models -- including analytically tractable equations, simplified simulations, and full general circulation models -- to study tropical phenomena including the Hadley Circulation, the Inter-Tropical Convergence Zone (ITCZ), the South Asian monsoon, Pacific and ENSO seasonality, the Walker Circulation, and the modeling of the tropical energy budget. We begin with an examination of tropical SSTs and the ITCZ under warming, finding that the Hadley cells weaken and tropical SST gradients decrease in a warmer climate. The ocean's subtropical cells strengthen and transport more energy in a warmer climate, further flattening SST gradients. The ITCZ, meanwhile, increases in strength with warming because of the exponential relationship between humidity and temperature, and the presence of a dynamic ocean changes a single-ITCZ with a sinusoidal seasonal cycle to a double-ITCZ with a square wave seasonal cycle. Next, we study the “monsoonal mode,” an energy and precipitation anomaly triggered by the South Asian Monsoon that moves into the West Pacific during Northern Hemisphere autumn. The monsoonal mode is discussed as a possible underlying cause of the seasonality of the Pacific, i.e., that the West Pacific and ENSO both have seasonalities that favor one season despite being on the equator. To show this, ENSO seasonality is examined using simplified simulations and an energy budget of the Central-Eastern Equatorial Pacific. Similar techniques are then used to study ENSO events in warmer climates, and it is found that the Pacific zonal SST gradient and the Walker circulation, which are the sources of ENSO instability, weaken with warming, decreasing the magnitude of ENSO events. Lastly, we assess the energy budget of CMIP6 models.
It is shown that all CMIP6 models have more energy input to the deep tropics than ERA5 reanalysis, and this bias is bigger in the Southern Hemisphere. The hemispheric asymmetry in this bias can be traced back to radiation absorbed by the atmosphere, which is associated with dust (for shortwave radiation) and total column water (for longwave radiation). As a whole, this thesis demonstrates the utility of studying complex problems with simple models and deepens our understanding of Earth's tropics.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158855</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ponderomotive Forces in Pilot-Wave Hydrodynamics</title>
<link>https://hdl.handle.net/1721.1/158854</link>
<description>Ponderomotive Forces in Pilot-Wave Hydrodynamics
Evans, Davis J.
Droplets bouncing on a vibrating bath may self-propel (or ‘walk’) via a resonant interaction with their self-induced pilot wave. In pilot-wave hydrodynamics (PWH), the spontaneous emergence of coherent, wave-like statistics from chaotic trajectories has been reported in several settings. Owing to the similarity of PWH to Louis de Broglie’s realist picture of quantum mechanics, the question of how such statistics emerge has received considerable recent attention.&#13;
&#13;
A compelling setting where coherent statistics emerge in PWH is the hydrodynamic analog of the quantum corral. When walking droplets are confined to a circular cavity or ‘corral’, a coherent statistical pattern emerges, marked by peaks in the positional histogram coincident with extrema of the cavity eigenmode. Stroboscopic models that idealize the drop’s bouncing dynamics as being perfectly resonant with their Faraday wave field have proven incapable of capturing the emergent statistics.&#13;
&#13;
In this thesis, we present new experimental and theoretical findings in a variety of pilot-wave hydrodynamical settings where non-resonant bouncing plays a key role in the droplet dynamics and emergent statistics. First, we find that modulations to resonant bouncing influence the stability threshold of a Bravais lattice. Second, we demonstrate that resonant bouncing can be disrupted by the imposition of suboctave driving, which may be used to induce a rearrangement of bound states of bouncing droplets.&#13;
&#13;
We then proceed to an integrated experimental and theoretical study of the hydrodynamic corral, highlighting the role of non-resonant bouncing in the emergent statistics. We first introduce a new experimental method for simultaneously measuring the drop position and pilot wave height. We then report new measurements of the pilot wave and vertical bouncing dynamics. We demonstrate that the complex pilot wave arising in corrals may play the same role as suboctave driving in disrupting resonant walking. Our experimental findings motivate a new theoretical framework that predicts that modulations in the histogram emerge as a consequence of ponderomotive effects induced by non-resonant bouncing. We then connect the ponderomotive drift observed in hydrodynamic corrals to extant theories of quantum mechanics.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158854</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Computing from Graphs</title>
<link>https://hdl.handle.net/1721.1/158853</link>
<description>Quantum Computing from Graphs
Khesin, Andrey Boris
While stabilizer tableaus have proven exceptionally useful as a descriptive tool for additive quantum codes, they otherwise offer little guidance for concrete constructions or coding algorithm analysis. We introduce a representation of stabilizer codes as graphs with certain structures. Specifically, the graphs take a semi-bipartite form wherein input nodes map to output nodes, such that output nodes may connect to each other but input nodes may not. Intuitively, the graph’s input-output edges represent information propagation of the encoding circuit, while output-output edges represent the code’s entanglement structure. We prove that this graph representation is in bijection with tableaus and give an efficient compilation algorithm that transforms tableaus into graphs. We then show that this map is efficiently invertible, which gives a new universal recipe for code construction by way of finding graphs with sufficiently nice properties.&#13;
&#13;
The graph representation gives insight into both code construction and algorithms. To the former, we argue that graphs provide a flexible platform for building codes, particularly at small non-asymptotic scales. We construct as examples several constant-size codes and several infinite families of codes. We also leverage graphs in a probabilistic analysis to extend the quantum Gilbert-Varshamov bound into a three-way distance-rate-weight trade-off. To the latter, we show that key coding algorithms (distance approximation, weight reduction, and decoding) are unified as instances of a single optimization game on a graph. Moreover, key code properties such as distance, weight, and encoding circuit depth are all controlled by the graph degree. We give efficient algorithms for producing simple encoding circuits whose depths scale as twice the degree and for implementing logical diagonal and certain Clifford gates with non-constant but reduced depth. Finally, we construct a simple efficient decoding algorithm and prove a performance guarantee for certain classes of graphs. These results give evidence that graphs are generically useful for the study of quantum computing and its practical implementations.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158853</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Underwater Semantic Simultaneous Localization and Mapping</title>
<link>https://hdl.handle.net/1721.1/158852</link>
<description>Underwater Semantic Simultaneous Localization and Mapping
Singh, Kurran
Building semantically meaningful object-level maps of underwater environments is crucial for enabling higher-level autonomy, fostering human-robot collaboration, and providing compressed map representations for bandwidth-constrained underwater communications. Localizing against such maps can also improve the positioning accuracy of underwater vehicles by correcting for odometric drift. However, underwater semantic simultaneous localization and mapping (SLAM) has lagged behind analogous terrestrial and aerial semantic SLAM techniques, largely due to the lack of large labeled underwater datasets and the challenging sensor modalities specific to underwater environments. To address these shortcomings, this thesis develops a range of methodologies to advance underwater semantic SLAM capabilities. &#13;
&#13;
First, self-supervised learning and visual foundation models are leveraged to detect and segment underwater objects in an open-set manner, i.e., objects need not be present in the training dataset to be detected. The machinery of the open-set object detection technique breaks several assumptions made by existing closed-set semantic SLAM methods. Thus, new methods for object representation and data association are proposed and demonstrated. A method to localize underwater objects is then developed through an analysis of the geometry of underwater monocular cameras and multibeam sonars. &#13;
&#13;
Finally, a formulation of open-set object-level place recognition as a graph matching problem is introduced. The formulation includes a method for calculating and tracking semantic uncertainty for open-set object detections. Experimental results on both underwater and terrestrial datasets demonstrate that the proposed formulation can be used for real-time accurate open-set object-based place recognition. &#13;
&#13;
In summary, techniques for underwater object detection, localization, and data association are introduced and integrated with probabilistic graphical models for open-set semantic SLAM. The proposed techniques are tested across a wide variety of scenarios, and are shown to generalize to terrestrial settings as well.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158852</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Testing of a Hovercraft with Electroaerodynamic Propulsion</title>
<link>https://hdl.handle.net/1721.1/158851</link>
<description>Design and Testing of a Hovercraft with Electroaerodynamic Propulsion
Quiram, Matthew
Electroaerodynamic (EAD) multistaged ducted (MSD) thrusters are a novel solid-state thruster architecture that has been shown to provide order-of-magnitude improvements in thrust density compared to single-stage EAD thrusters. This makes MSD thrusters well-suited for use in EAD hovercraft, where generating sufficient pressure is crucial for hovering. This study explored the feasibility of a hovercraft powered by wire-to-airfoil corona discharge MSD thrusters through a scaled-down prototype and a final design. To limit the scope of the project, the hovercraft was tethered to a ground-based power supply and carried a payload mass to simulate having on-board power electronics. The design of an EAD hovercraft involved applying the principles of hovercraft lift to a design optimization that implements the recently developed EAD MSD thruster model. A hovercraft prototype was designed and constructed to validate the models applied during the design phase and to test hovering capabilities without a payload. Using the manufacturing lessons and insights gathered in prototype testing, a full-scale model was designed and built to hover while carrying an additional payload representative of a set of power electronics.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158851</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organizational Forms and Practices: Essays on Implications for Frontline Workers and Performance</title>
<link>https://hdl.handle.net/1721.1/158850</link>
<description>Organizational Forms and Practices: Essays on Implications for Frontline Workers and Performance
Scott, Karen MacKenzie
In three essays, this dissertation explores how organizational forms and workforce practices shape frontline work experiences and organizational performance. Using both quantitative and qualitative methods, I explore how frontline workers experience work and what factors shape their performance. In the first essay, I examine how workforce practices in nursing homes relate to organizational performance. Specifically, I evaluate performance on resident health outcomes for both pre-pandemic and COVID-19 conditions. Combining federal and state administrative data sets with non-public data on early COVID-19 spread and mortality, I investigate the degree to which the organization of work for frontline workers predicted resident health as a measure of organizational performance for nursing homes. In a period of global stress on health and care systems, I seek to understand to what extent pre-pandemic predictors of performance remained important. When nurses spent more time with residents, residents experienced better care both before and during the pandemic. Yet contrary to expectation, the role of clinical outsourcing became more relevant during the pandemic, potentially reflecting greater workforce flexibility or targeted COVID-19 workforce support to facilities that outsourced nursing activities before the pandemic. These results depict how environmental changes and alternative performance measures call into question established relationships in the high-performance work systems literature. In the second essay, I use in-depth interviews and field observations to uncover the process of constructing ownership culture in an employee-owned firm. I demonstrate how workers co-create their own control system, supported by a high financial value of ownership, strategic managerial communication, peer pressure, and performance management. 
This critical case challenges the dominant view in the employee-ownership literature that success requires formal worker participation in decision-making. Further, it investigates the “black box” of culture-building in an employee-owned firm. The third essay builds on this understanding by evaluating the stated motives of individual worker-owners in a home care cooperative. The cooperative developed as a pilot initiative with non-profit partners to build a home care organization that would provide quality jobs and quality care while integrating immigrant workers. I traced the workers’ justifications for joining and participating in the cooperative. Rather than aligning with expected motives from previous studies or with Worker Center motives, I find that these workers adapted motives to reflect their realities, such as multiple jobs and a lack of labor rights in practice. This analysis highlights the decoupling of workers’ experiences from stated organizational goals, emphasizing the importance of collecting workers’ perspectives. Taken together, these three essays contribute insights into how frontline workers shape organizational performance by interpreting organizational context, culture, and structure. Results indicate that organizational performance is not merely a function of workplace practices but is directly influenced by frontline workers based on their individual motives and roles in workplace culture. These findings imply that by directly engaging with frontline workers’ motives, organizational leaders and policymakers can design organizations that improve work and performance.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158850</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston</title>
<link>https://hdl.handle.net/1721.1/158849</link>
<description>A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston
Proman, Zachary D.

</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158849</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Exploration of a Miniaturized Stirling Engine</title>
<link>https://hdl.handle.net/1721.1/158848</link>
<description>Design Exploration of a Miniaturized Stirling Engine
Hee, Ryann
Increased interest in long-term space exploration has increased demand for small yet powerful energy sources, especially for remote and harsh environments where traditional power sources may be impractical. In such scenarios, space probes and high-reliability systems necessitate innovative solutions to meet their growing power and thermal management requirements while maintaining small form factors. Presently, micro power systems fall short of achieving the desired efficiencies for these applications, typically hovering around 2% [1]. Stirling engines, with their proven capability to attain high thermodynamic efficiency (30-40%), offer a promising solution if this efficiency can be maintained in a miniaturized form [2]. This study delves into the design space of a miniaturized Stirling engine with a target input of 2 Wth, which could be tailored for small-scale (mesoscale, ~cm³) high-efficiency power generation or micro-cooling. Previous research has laid the groundwork for understanding the thermodynamics of miniaturized Stirling engines, exposing substantial challenges, including overwhelming parasitic losses at this scale. The current study endeavors to mitigate these losses and explore the path to optimal efficiencies through Simulink modeling. Simulations have demonstrated design spaces capable of producing mechanical efficiencies as high as 14% with a 2 Wth input, marking significant progress in addressing the limitations of current micro power systems. This approach has significant implications for enabling the power generation required for small space probes, particularly those on long-duration missions that need self-sustaining power over extended periods [3], [4]. As the study advances, it holds the promise of yielding a physical prototype based on the findings of the design space study, helping push the field forward for future power generation and micro-cooling in small-scale space technology. 
This thesis aims to map the design space of a miniaturized Stirling engine, focusing on mitigating parasitic losses to achieve markedly greater efficiency compared to existing technologies.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158848</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a cam and follower linear actuator for satellite optical systems</title>
<link>https://hdl.handle.net/1721.1/158847</link>
<description>Design of a cam and follower linear actuator for satellite optical systems
Brown, Darrell
Optical systems for satellites are used to image and track the physical environment of Earth from space, and the images they produce can be controlled through the rotation and movement of the optical system. Optical alignment is achieved through linear actuators, which constrain different degrees of freedom of the optical system. Optical systems require precise alignment, meaning the linear actuators that align them must have precise resolutions. During launch, the satellite experiences both high acceleration and large-magnitude vibrations, which can damage equipment. Common precision actuation methods cannot meet the high stiffness required for these satellite linear actuators. A cam and follower linear actuator was designed to fulfill these stiffness and precision requirements. By modeling the dynamic and kinematic interactions between the cam and follower, a cam shape was designed and the necessary materials were chosen. Next, through analysis of the process capabilities of available fabrication tools, manufacturing methods for the different parts were selected. Finally, using components designed for testing, kinematic tests were conducted on the linear actuator. Testing of the actuator demonstrated it was capable of actuating with a precision of 9.15 microns. More testing is needed to understand the stiffness of the device.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158847</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Speech Therapy</title>
<link>https://hdl.handle.net/1721.1/158846</link>
<description>Speech Therapy
Hintikka, Kathleen
Words can hurt. Most theorists who are working on speech and harm, even across disciplinary lines, agree to that much. There is less agreement, however, regarding the mechanisms by which speech causes or constitutes harm. By way of paying special attention to the dangers of ordinary, and to varying degrees, “socially acceptable” language, my dissertation, Speech Therapy, is a three-part exploration of the ways that speech plays significant roles in constructing and maintaining unjust conditions of domination and subordination both between social groups and within the broader society, even when its effects are unintended or go unnoticed. The first paper modifies a Gricean account of implicature to accommodate implicature in the interrogative mood. I then argue that interrogative implicature can help us make better sense of certain kinds of common microaggressions. In the second paper, I focus more explicitly on the kinds of harm and subordination that speech can inflict on its targets even when the target is not around to hear it. I argue that all socially significant speech — speech about race, class, etc. — articulated from any standpoint contains second-personal vocative hails. That is, all socially significant speech, even that not uttered second-personally, contains a second-personal norm that implicates both the speaker and members of the targeted social category, even when no member of the targeted social category is invited as an interlocutor by the speaker. The resulting view is a non-ideal, intersectional, and situated approach to second-personhood. The final paper is about how the things we do with language can alter the epistemic landscape of our communities. I argue that slurs, pejoratives, and misused epithets, a class of terms that I will refer to as demeaning speech, constitute a specific kind of epistemic oppression. My view is not that demeaning speech causes epistemic oppression, but rather that demeaning speech constitutes epistemic oppression. 
The oppression occurs in the mere uttering of these terms; the act of making it such that someone might have their testimony discredited in the future, of inflicting epistemic risk, is itself an epistemic injustice.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158846</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding options for the mechanical characterization of biological materials</title>
<link>https://hdl.handle.net/1721.1/158845</link>
<description>Expanding options for the mechanical characterization of biological materials
Varner, Hannah Martin
The mechanical properties of biological tissues change over time and with disease progression, and they provide important information regarding the limits a tissue can sustain before injury. Therefore, quantifying these properties in biological materials and their synthetic simulants could be instrumental for accurate medical diagnoses, treatment of disease, and prediction of traumatic injury survivability. Conventional methods of mechanical testing, such as uniaxial tension, compression, and nanoindentation, provide highly repeatable and reliable results for the stiff materials for which they were originally developed. However, the same cannot be said when these methods are applied to the characterization of soft and biological materials due to limitations of specimen size, fixturing capabilities, and sample preparation. Volume Controlled Cavity Expansion (VCCE) is a recently developed technique to measure local mechanical properties of soft materials in their natural environment. Through the highly controlled expansion of a fluid bubble at the tip of an injection needle, paired with simultaneous measurement of the resisting pressure, a local signature of a material's mechanical response can be obtained. &#13;
&#13;
This thesis presents the first systematic application of VCCE to biological materials. It begins by presenting a cautionary example of the limitations of soft material testing, focusing on the synthetic silicone and tissue simulant polydimethylsiloxane (PDMS). We find that the wide range of mechanical properties reported in the literature is due to biases imparted by different testing methods. We then use VCCE to examine the elastic response of gelatin, whole blood clot, and liver tissue, demonstrating with high repeatability that subtle mechanical changes occur within a matter of days as these tissues age. Finally, this work applies VCCE to investigate what happens to these materials after elastic expansion and throughout a process of controlled damage. Biological materials are found to demonstrate toughening that does not appear in gelatin and PDMS. Because of these observed differences, we caution against using gelatin and PDMS to simulate the behavior of biological materials in extreme loading cases. Combining these findings, this thesis provides evidence that more widespread adoption of VCCE in mechanical testing would provide a path to better understanding of the mechanics of soft and biological materials, with implications in fundamental mechanics research as well as in biological and healthcare applications.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158845</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Missing Megawatts Problem: Improving Modelling Practices to Prepare for an Uncertain Future</title>
<link>https://hdl.handle.net/1721.1/158844</link>
<description>The Missing Megawatts Problem: Improving Modelling Practices to Prepare for an Uncertain Future
Bhatt, Nirmal K.
Long-term energy system planning is one of the most pressing challenges for the power sector, which must maintain reliability while decarbonizing. Currently, no unified regulatory, modelling, or market framework exists in the United States to facilitate planning in pursuit of a clean and reliable grid. Variable renewable energy (VRE) generation can produce cheap power, but it increases the grid's exposure to interannual variability in demand and VRE generation. This raises questions about how grid planners will value VRE and clean firm power (such as nuclear power). This thesis evaluates the importance of considering interannual variability and clean firm power in long-term energy system planning. I use GenX, an open-source capacity expansion model, to model the U.S. New England region in 2050, assuming a high degree of electrification and various technology availability and emissions reduction pathways. I find that clean firm power will reduce the cost of decarbonizing the New England grid, but that grid planners must consider decades of weather and demand data if they are to make appropriate investments. I also present a novel outputs-based time-series clustering method which allows models like GenX to optimize grids using longer time series of weather and demand data. Based on my work, I recommend that policymakers, grid operators, and market designers establish rigorous standards around energy modelling for long-term planning that include multiple scenarios and appropriately value technologies such as firm power.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158844</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Order Immersed Finite Difference Methods for Complex Domains with Moving Boundaries and Interfaces</title>
<link>https://hdl.handle.net/1721.1/158843</link>
<description>High Order Immersed Finite Difference Methods for Complex Domains with Moving Boundaries and Interfaces
Gabbard, James
Moving domain boundaries and material interfaces are a hallmark of multiphysics systems such as fluid-structure interaction, alloy solidification, and multiphase flows. Simulating moving interfaces with traditional techniques requires a moving mesh that continuously adapts to the interface, which is costly and places restrictions on the interface motion. Immersed methods avoid these challenges by simulating moving geometries on a stationary Cartesian grid, locally altering the numerical method to account for boundaries and interfaces that are not grid-aligned. Most existing immersed methods have low-order spatial accuracy, requiring fine grids to generate accurate results. High order immersed methods can produce more accurate results at lower resolution, making them a promising tool for 3D simulations with tight error tolerances. However, the majority of available high order immersed methods have been numerical experiments developed for stationary 2D geometries and simple PDEs. In this thesis we demonstrate that high order immersed methods can be extended to complex nonlinear PDEs and moving 3D geometries, both of which are necessary to simulate practical engineering problems. We begin by introducing a boundary treatment that locally approximates PDE solutions with high order accuracy using a weighted least-squares fit, and show that the procedure remains valid for smooth 2D or 3D geometries satisfying a local curvature constraint. This boundary treatment is combined with a high order finite difference method to discretize the Poisson equation with up to sixth order accuracy. We then expand the scope of the method to include PDEs with immersed material interfaces, spatially-variable coefficients, vector-valued unknowns, cross-derivative terms, and nonlinearities. 
These techniques are applied to generate a sixth-order discretization of 2D nonlinear elasticity, demonstrating the applicability of high order immersed methods to complex PDE systems relevant in mechanical engineering. In the second half, we focus on large-scale 3D simulations with moving boundaries. We construct a third order immersed advection discretization with provable stability in one dimension, and show experimentally that the scheme remains stable in 2D and 3D domains. To treat moving boundaries, we introduce a general framework that allows high order immersed methods to maintain their accuracy in both space and time when paired with any explicit Runge-Kutta time integrator. We conclude by presenting results from massively-parallel high order simulations of the 3D advection-diffusion equation with moving boundaries on a multiresolution grid. Taken together, these results demonstrate that high order immersed methods can achieve the scale and complexity necessary to enable practical simulations that are difficult or impossible with traditional mesh-based techniques.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158843</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Floor Plan Design Collaborator: A Data-Driven Approach to Assist Human Architects in Design Exploration</title>
<link>https://hdl.handle.net/1721.1/158842</link>
<description>Floor Plan Design Collaborator: A Data-Driven Approach to Assist Human Architects in Design Exploration
Sung, Woongki
After a long AI winter since the 1980s, artificial intelligence is now experiencing a renaissance due to enhanced computing power and access to vast amounts of data. Today, machines can talk, sing, and draw like human experts. Despite this progress, we are still far from the vision where human designers and AI collaboratively discuss and develop designs. This study argues that a data-driven approach holds great potential in the design process by quickly learning from existing examples and generating new alternatives for exploration. To support this claim, the study presents a generative framework that learns from existing examples and generates new designs. Specifically, the proposed framework employs Bayesian networks to encode site layout data and floor plan examples, generating new design examples through a Markov Chain Monte Carlo (MCMC) sampling procedure. Experiments on real-world examples demonstrate that the framework effectively summarizes the statistical information of given design examples and generates unseen examples based on the learned knowledge. The transparency of the data representation and the inner workings of the proposed framework facilitate an active feedback loop in the iterative learning and generation process between human designers and machines. Observations throughout the study reveal intrinsic limitations and potential improvements of contemporary optimization-based approaches from the perspective of both lateral and vertical design development.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158842</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>If These Hills Could Speak</title>
<link>https://hdl.handle.net/1721.1/158841</link>
<description>If These Hills Could Speak
Bayowa, Tejumola
If these hills could speak, what would they reveal, and how would they express it? This central question guides this thesis, which examines three hills in the heart of Ibadan, Southwest Nigeria— each occupied by the ruins of colonial monuments. Before the construction of these structures, the hills served as sanctuaries, providing water, food, and safety. However, under British colonial rule, architecture was utilized to disrupt this harmonious relationship. Over the course of 50 years, three monuments were erected that mark Britain’s colonial imprint on the city: a neoclassical courthouse (1925), built to assert control over the central market; a 60-foot tower (1936), which displaced the surrounding forests; and a theater (1977), built during a time of national struggle for unity and identity. Today, at the foot of these hills, a community has forged a way of life within a broken system. By repurposing and subverting structures in ways their creators never intended, this community embodies a praxis and poiesis of adaptive creativity within the built environment. This process represents a transformative act of pidginization—a collective tactic for repair, resistance, and reappropriation in response to an ongoing, imposed socio-political order. For these hills to speak again, the ruins must be transformed. This thesis begins that process by applying acts of pidginization learned from below to the three ruins. It proposes their conversion through deconstruction and de-monumentalization, with the aim of fostering economic development, ecological restoration, and cultural production in the city.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158841</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Hing Travel Agency Fictional Archive of Disappearing Hong Kong</title>
<link>https://hdl.handle.net/1721.1/158840</link>
<description>On Hing Travel Agency Fictional Archive of Disappearing Hong Kong
Wu, Ina
Hong Kong, shaped by rapid transformation and precarious land ownership, is a city where erasure defines its urban landscape. Amid this flux, a place I once called home was demolished, prompting the question: “How can one return to a place that no longer exists?” This thesis explores the transformative potential of disappearance, reframing it as a generative force that creates space for imagination, resistance, and continuity. Through On Hing Travel Agency (OHTA), demolished buildings "travel" into fictional worlds, becoming vessels of memory and imagination. Rooted in Hong Kong’s literary tradition—where fiction resists erasure and archives aspirations—the project employs fiction as both a tool of preservation and a site for belonging. Fictional destinations, inspired by Hong Kong novels such as The Permanent City (1959), The Floating City (1986), and The Vanished Cities (2010), reflect pivotal historical moments while offering pathways to reconcile personal loss and master alternative spatial logics. The project culminates in the Lost Traveler’s Guide to Hong Kong, a publication curating maps, brochures, and layered narratives to immerse travelers in speculative thinking. By bridging the past and future, real and imagined, OHTA demonstrates how fiction can reclaim agency within the politics of disappearance, transforming loss into a catalyst for new narratives and creative engagement. Even in absence, Hong Kong’s disappearing spaces retain their resonance, underscoring the creative potential of loss.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158840</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sweating Details: Labor of “Los Constructores del Valle”</title>
<link>https://hdl.handle.net/1721.1/158839</link>
<description>Sweating Details: Labor of “Los Constructores del Valle”
Andrade, Gabriel
“You should always be grateful for the work you can find, so make sure you prove you deserve it.” A saying commonly heard growing up amongst the Builders of the Valley in Orange, NJ. The necessary attitude that fuels the built environment.&#13;
&#13;
This thesis proposes a dialogical method of tectonics through exploring the embodied experiences of those who physically build the city and its architecture, positioning architectural design as fundamentally tied to the labor that makes buildings possible. It centers on two primary questions: “Who builds this architecture?” and “How does this design impact a builder’s occupational livelihood?”&#13;
&#13;
To challenge professional standards that perpetuate a disconnection between designers and builders, this thesis reconnects me, as a designer, with my educators from Orange, NJ. These individuals—professional construction workers—shaped my earliest understanding of the built environment and how to navigate it socially and professionally. Through this process, I learn more about who they are, how they entered construction, and how the work has affected them over the years.&#13;
&#13;
This education, sustained through ongoing dialogue, points toward future opportunities to work together, focusing on designing better for the act of building by prioritizing the physical, mental, and financial longevity of my Educators. The culmination of this research and communication is materialized through four architectural details within a workspace, designed to showcase my Educators’ expertise and affinities as professionals. These details reimagine occupational choreography, opening up future workflows that both lessen and help heal the musculoskeletal disorders that many builders face after years of laboring across the tristate area.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158839</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laboratory Astrophysics Studies of Magnetized Collisionless Shock Precursors and the ³He³He Proton Spectrum at the OMEGA Laser Facility</title>
<link>https://hdl.handle.net/1721.1/158838</link>
<description>Laboratory Astrophysics Studies of Magnetized Collisionless Shock Precursors and the ³He³He Proton Spectrum at the OMEGA Laser Facility
Johnson, Timothy Mark
Laboratory astrophysics enables the study of astrophysical systems in the lab. There are broadly two types of laboratory astrophysics experiments: macrophysics and microphysics. Macrophysics experiments study a scaled down version of an astrophysical system while microphysics experiments create a small volume of matter with the same conditions as an astrophysical system. This thesis details work related to both macrophysics and microphysics laboratory astrophysics experiments. For the macrophysics contribution, collisionless shock experiments were conducted at the OMEGA laser facility using the new gas jet platform. Collisionless shocks are shock waves formed through plasma processes when particle collisions are negligible. These shocks can form as bow shocks in the interaction between the solar wind and planetary ionospheres and can accelerate charged particles to high energies. In the experiment, a CH plasma flow collides with a hydrogen gas jet plasma to create a forming magnetized collisionless shock. Different diagnostics show a moving density jump, strong magnetic fields, and the acceleration of electrons. These observations, coupled with magnetohydrodynamic and kinetic particle-in-cell simulations, paint a complete physical picture of the forming shock in a configuration similar to the bow shock of Venus. Late-time proton radiographs show a complicated structure, which is studied for magnetic turbulence. Turbulence is important in several astrophysical systems, especially collisionless shocks where it dissipates shock kinetic energy and is essential for accelerating charged particles to cosmic ray energies. Magnetic power spectra extracted from proton radiography data show a break in the spectrum between the ion Larmor radius and the ion skin depth for high plasma β, a sign of kinetic turbulence. Large scale particle-in-cell simulations of high β turbulence also have this feature showing that the experimental data are consistent with high β kinetic turbulence.
For the microphysics contribution, a new proton spectrometer is designed for measurements of the ³He³He proton spectrum. The ³He³He fusion reaction is the last step of the proton-proton I chain which produces the majority of the sun’s power. Previous experiments were not able to measure the ³He³He proton spectrum below 6 MeV. A new proton step range filter (SRF) spectrometer with a larger energy range is designed using a Monte Carlo tool. This tool uses Geant4 and is able to self-consistently apply the instrument response function. The new SRF design is validated and a method for analyzing experimental data using the Monte Carlo code is presented.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158838</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shining a Light on the Nucleus: Photonuclear Measurements from Correlations to Charmonium</title>
<link>https://hdl.handle.net/1721.1/158837</link>
<description>Shining a Light on the Nucleus: Photonuclear Measurements from Correlations to Charmonium
Pybus, Jackson R.
The atomic nucleus is composed of a collection of nucleons (protons and neutrons), which are bound together by the nucleon-nucleon (NN) interaction that originates from Quantum Chromodynamics (QCD). While most nucleons experience the force from the rest of the nucleus as a single net “mean-field” interaction that binds them relatively weakly, a small but impactful fraction are in configurations called “Short-Range Correlations” (SRCs), in which they pair with another nucleon at very short distance to experience strong interactions, significant binding, and high momentum. Hard, high-energy scattering reactions in which an SRC pair is broken apart, knocking both nucleons out of the nucleus, provide the ability to probe the details of these SRC configurations in the nucleus. Previous measurements have had limited statistics and kinematic reach, and the theoretical tools available were insufficient to draw quantitative conclusions regarding the ground-state properties of SRCs. The studies described in this thesis represent the first global analysis of SRC breakup measurements in order to present a unified picture of SRCs within light- to medium-size nuclei. This includes the use of a novel theoretical framework, the Generalized Contact Formalism, which connects scattering cross-section measurements and the ground-state properties of the SRC pair, to quantitatively interpret a variety of electron-scattering measurements. This is brought to culmination by a report on the first measurement of SRC pairs via the use of hard meson photoproduction reactions, which, despite differing significantly from the mechanics of electron-scattering, is well-described under a common framework, pointing to a consistent and universal picture of SRCs across reaction channels.
I also report on the first measurement of J/ψ photoproduction in the near- and below-threshold kinematic region, giving the first insights into the gluonic structure of bound nucleons in the large-x “valence” region and providing constraints on a gluonic “EMC effect”. In addition to these studies, I provide details on the search for Primakoff production of axion-like particles using the photoproduction data taken for this experiment, and I conclude by describing studies of nucleon spin structure measurements that will be performed at the forthcoming U.S. Electron-Ion Collider.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158837</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Instrument for the Measurement of Soft Material Nonlinear Mechanical Response</title>
<link>https://hdl.handle.net/1721.1/158836</link>
<description>An Instrument for the Measurement of Soft Material Nonlinear Mechanical Response
Unikewicz, Brendan M.
Soft material research has seen significant growth in recent years, with emerging applications in robotics, electronics, and healthcare diagnostics where understanding material mechanical response is crucial for precision design. Traditional methods for measuring nonlinear mechanical properties of soft materials require specially sized samples that are extracted from their natural environment to be mounted on the testing instrument. This has been shown to compromise data accuracy and precision in various soft and biological materials. To overcome this, the Volume Controlled Cavity Expansion (VCCE) method was developed. This technique tests soft materials by controlling the formation rate of a liquid cavity inside the materials at the tip of an injection needle, and simultaneously measuring the resisting pressure which describes the material response. Despite VCCE’s early successes, expansion of its application beyond academia has been hindered by cost, size, and expertise. In response to this, the first portable, bench-top instrument utilizing VCCE is presented here. This device, built with affordable, readily available components and open-source software, streamlines VCCE experimentation without sacrificing performance or precision. It is especially suitable for space-limited settings and designed for use by non-experts, promoting widespread adoption. The instrument’s efficacy was demonstrated through testing Polydimethylsiloxane (PDMS) samples of varying stiffness. This study not only validates instrument performance, but also sets the stage for further advancements and broader applications in soft material testing. All data, along with acquisition, control, and post-processing scripts, are made available on GitHub.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158836</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Acoustic scattering of spherical directional waves by smooth and statistically rough solid elastic cylinders</title>
<link>https://hdl.handle.net/1721.1/158835</link>
<description>Acoustic scattering of spherical directional waves by smooth and statistically rough solid elastic cylinders
Mursaline, Miad Al
Realistic sonars radiate spherically spreading waves and have directivity. Therefore, they insonify a target over a finite number of Fresnel zones and span a continuum of oblique incident angles, even when the center of the beam is at normal incidence. These effects strongly influence both the overall scattered pressure levels and resonances. For example, because of the spreading of the beam and associated oblique insonification within the beam, normal modes associated with axially propagating guided waves are excited that would not have otherwise existed for an idealized incident plane wave. This thesis analyzes acoustic scattering by solid elastic cylinders insonified by realistic sonars both theoretically and experimentally. A theoretical model to predict scattering by arbitrary-length cylinders is derived based on the apparent volume flow accounting for the above-mentioned practical sonar properties, namely, spherical spreading and directionality. The formulation is first benchmarked against the formally exact T-matrix solution and tested against previously published laboratory data for finite cylinders. It is found that the formulation outperforms the T-matrix solution in predicting laboratory observations at near-normal incidence. Laboratory experiments are then conducted on arbitrary-length smooth cylinders insonified by a directional sonar, with a small number of Fresnel zones excited, to evaluate the theory for monostatic as well as bistatic geometries. The formulation is found to outperform the classical scattering models in predicting the new measurements. For example, resonances associated with axially propagating guided waves excited at broadside incidence observed in the experiments are predicted by the proposed formulation but not by the classical models. The measurements are found to agree well with predictions in terms of overall scattering levels and resonance locations.
In addition to testing the predictions, the bistatic laboratory observations presented herein substantiate the significant effects on scattering due to the properties of the incident field from practical sonars. The comparison between theoretical and experimental results is then extended for the more complex case involving statistically rough elastic cylinders with one-dimensional Gaussian roughness. The roughness is found to have a considerable impact on all aspects of scattering—overall levels as well as locations and shapes of resonances. General agreement is found between the theoretically predicted and measured ensemble averaged scattered pressure. Both the theory and data reveal two main observations in the ensemble-averaged scattered field: overall scattered pressure levels are seen to decrease, and resonance effects are diminished compared to the corresponding case of smooth cylinders. The effects of various statistical properties of the rough cylinder on the scattered field, namely different root mean square (RMS) roughness for fixed correlation length and different correlation lengths for fixed RMS roughness, are investigated. Finally, the fluctuations of the scattered field are analyzed using the derived formulation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158835</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Dynamics of Diversity, Equity, and Inclusion Practice Adoption</title>
<link>https://hdl.handle.net/1721.1/158834</link>
<description>The Dynamics of Diversity, Equity, and Inclusion Practice Adoption
Yadama, Aishwarya Pandey
Despite the widespread adoption of Diversity, Equity, and Inclusion (DEI) initiatives in corporate America, significant disparities persist in the representation, compensation, and treatment of women and racial minorities. This paper investigates why well-intentioned DEI efforts often fail to achieve their intended outcomes and identifies managerial barriers to progress. This research employs a qualitative dynamic modeling approach to analyze the complexities of DEI practice implementation within organizations. I conducted a scoping review, focusing on longitudinal and experimental designs to identify key mechanisms influencing the outcomes of DEI practices. The interplay between organizational processes and individual cognitive and behavioral responses can be illustrated via reinforcing and balancing feedback loops that I map onto a causal loop diagram, which reveals how DEI initiatives interact with existing organizational processes and cultural dynamics. This paper introduces a dynamic perspective on DEI practice implementation, highlighting the feedback mechanisms that can either hinder or facilitate progress toward diversity goals. The model reveals that certain DEI practices may inadvertently trigger reinforcing loops that perpetuate inequality. By mapping DEI practices and their effects, this study provides a framework for understanding how DEI outcomes can diverge significantly depending on different implementation strategies. It underscores the importance of considering the endogenous feedback effects of DEI initiatives and offers insights into strategic interventions that can disrupt undesirable reinforcing cycles and promote progress toward organizational diversity, equity, and inclusion.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158834</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precisely Loose: Unraveling the Potential of Particles</title>
<link>https://hdl.handle.net/1721.1/158833</link>
<description>Precisely Loose: Unraveling the Potential of Particles
Yoon, Jeonghyun
Random, irregular, erratic, arbitrary, unspecifiable, and unpredictable—particles. In a post-extractive future, our reliance on standardized materials, continuously sourced through the exploitation of raw resources, will no longer be sustainable. Instead, architecture will increasingly contend with materials that defy standardization. This thesis focuses on these non-normative materials—particles, encompassing construction demolition debris, manufacturing defects, naturally occurring gravels, and locally sourced mineral waste. Ubiquitous yet underutilized, these materials hold potential not only for use, but also for reuse. However, they are often dismissed as rigid and unpredictable ingredients that require precise manipulation and cumbersome processing in order to achieve predictable results. What kind of architecture could emerge if we embraced the inherent nature of these particles, not as rigid materials to be controlled, but as dynamic, fluid entities? By embracing their uncertainty as a generative design agent, how would design approaches and construction processes transform? This thesis presents a catalogue of precisely loose methods for engaging with particles. These methods offer an alternative design approach that moves beyond the obsession with refinement and control over material behavior. By pouring, pushing, reconfiguring, and containing—in lieu of identifying, cutting, placing, and stacking—this series of interactions explores the potential of plurality, investigating how loosely controlled particles can adapt to collaborative construction processes. In doing so, this thesis redefines architectural material culture rooted in rubble, offering a framework to reimagine our relationship with the irregular, the unpredictable, and the overlooked.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158833</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Agent Hybrid Prediction in Autonomous Driving</title>
<link>https://hdl.handle.net/1721.1/158832</link>
<description>Multi-Agent Hybrid Prediction in Autonomous Driving
Yau, Tiffany Yee Kay
In autonomous driving, the hybrid task of predicting both high-level actions and low-level trajectories of human behaviour is fundamental to safe downstream decision-making. Much of the existing work in behaviour prediction tackles this problem without sufficiently modelling agent-agent interactions, limiting its ability to capture the full range of possible joint outcomes. Another key challenge in multi-agent prediction is the intractable prediction space that grows exponentially in the number of agents and duration of the prediction horizon. As a result, scalability is a major challenge. This thesis presents two approaches to address these challenges in multi-agent hybrid prediction. In our first approach, we model interactions and address scalability by learning to factor the joint prediction distribution. We observe that agents do not interact with all other agents in the scene, but rather, there are groups that strongly interact. Therefore, we group agents and represent the high-level interaction outcomes of groups with discrete variables. We additionally assume that inter-group interactions are sparse and can be sufficiently represented with a directed acyclic graph. These assumptions enable us to factor the distribution into a product of factors, effectively reducing the prediction space, and providing an order in which to easily sample discrete values. We evaluate the performance of this method on a large-scale autonomous driving dataset and show that it exceeds prior methods in coverage of possible interaction outcomes by 24% to 48% on various multi-agent validation data splits, while maintaining state-of-the-art prediction error. Our second approach represents agents in a traffic scene as a set of concurrent hybrid models and assumes a collision avoidance model of interactions, rather than learning the model from data like the first approach. Our method begins enumeration based on a simpler collision-agnostic prior distribution.
Based on our factored representation, we determine the next best assignment to the prior. We extract bounding conflicts to correct the prior and increasingly reduce the error between the distribution used by enumeration and our collision-aware posterior distribution. Our experiments show that enumeration using A* with bounding conflicts (A*BC) is faster than A* and is therefore better at addressing scalability. In terms of prediction metrics, we find that our collision-aware posterior performs worse than the collision-agnostic prior and suggest future directions for improvement.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158832</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a computational tool and dynamometer for optimizing variable-speed centrifugal pump selection for a containerized, direct-drive photovoltaic electrodialysis desalination system</title>
<link>https://hdl.handle.net/1721.1/158831</link>
<description>Development of a computational tool and dynamometer for optimizing variable-speed centrifugal pump selection for a containerized, direct-drive photovoltaic electrodialysis desalination system
McWhinnie, Muriel A.
This thesis presents an optimized centrifugal pump selection methodology to improve the hydraulic efficiency of MIT’s Global Engineering and Research (GEAR) Center’s containerized, direct-drive photovoltaic electrodialysis desalination system capable of producing up to 300 m³ of potable water per day. The novel flow-commanded current control scheme of this containerized desalination plant (CDP), which enables its minimal energy storage, also means that the centrifugal pumps used are operated at variable speeds to respond to the solar irradiance. Unfortunately, centrifugal pumps are typically designed for fixed operating conditions, and manufacturers often only report pump performance at their rated frequency. By estimating the hydraulic resistances of the CDP and testing potential pumps on a redesigned dynamometer, a MATLAB-based tool was developed to quickly and iteratively characterize pump performance at their expected operating points in the CDP. A "Compatibility Factor" metric, defined by the normalized area under a pump’s efficiency-flow curve at its operating points, was devised to quantify each pump’s efficiency across the entire operating range of flow rates achievable under the CDP’s system constraints. Using this methodology, two 7.5 kW pumps were selected for each of the diluate and concentrate channels to the electrodialysis stacks, to be operated in alternation. Following pump testing on the dynamometer, this work outlines a methodology for characterizing a pump’s variable-speed efficiency at its operating points in any modeled system. This approach facilitates informed pump selection for the CDP to increase its water production and reduce its specific energy consumption, with an estimated improvement in hydraulic efficiency from 10% in the GEAR Center’s previous system to over 30%. Overall, this work is applicable to various photovoltaic pumping systems aiming to reduce carbon emissions through variable-speed operation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158831</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of Environmental Regulation on Data Center Valuation</title>
<link>https://hdl.handle.net/1721.1/158830</link>
<description>Impact of Environmental Regulation on Data Center Valuation
Lee, Donghyun
Artificial intelligence has become one of the defining trends of modern society, with applications spanning virtually every industry. This societal shift has also influenced the real estate landscape. While data centers have existed for decades, it is only in recent years that they have garnered significant attention, demonstrated by their strong rent growth and compressed cap rates. Alongside this attention, there has also been extensive research on how data centers impact the environment, such as "Quantifying the Sustainability Impact of Data Center Availability" by Manish Marwah et al., which presents how data center power architecture may impact the environment, and "The Environmental Footprint of Data Centers in the United States" by Md Abu Bakar Siddik, Arman Shehabi, and Landon Marston. This body of research quantifies the environmental impacts of data centers, specifically focusing on carbon and water footprints. However, what remains unexplored is how environmental regulations influence the valuation of data centers as a distinct real estate property type. This thesis examines how data center valuations could be impacted if existing environmental regulations were applied to regions where data centers are concentrated. The findings reveal a complex dynamic: while penalties under these regulations would reduce net operating income (NOI), potentially devaluing these assets, the same regulations would discourage new development, exacerbate the already constrained supply, and ultimately drive up market rents for these properties. As a result, these opposing forces create ambiguity regarding the net impact of such regulations on data center valuations, with the outcome depending on which force prevails. What is clear, however, is that tenants would bear the brunt of these regulations, as landlords are likely to pass on increased costs through higher rents.
On the other hand, while addressing the environmental impacts of data centers and AI applications is critical to achieving sustainability goals, the societal benefits of AI solutions—ranging from advancements in healthcare to increased operational efficiencies—must also be considered. Balancing these competing priorities presents a unique challenge for policymakers and investors, with significant implications for the future of real estate and the digital economy.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158830</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Examining the Economic Impact of Anti-Warehouse Development Policies in California: A Case Study of the San Diego Market</title>
<link>https://hdl.handle.net/1721.1/158829</link>
<description>Examining the Economic Impact of Anti-Warehouse Development Policies in California: A Case Study of the San Diego Market
Ghasemlou, Peggy
This thesis conducts a detailed examination of the implications of anti-warehouse development policies in San Diego, focusing on their impacts on key economic indicators from 2024 to 2034. The research provides an overview of the U.S. industrial market, addressing crucial topics such as logistics market size, job creation, and the growth of e-commerce, while also exploring the NIMBY phenomenon and its influence on community opposition to developments, including a discussion of Bill 98 and its legislative implications. A specific focus on the industrial market in Southern California reveals important insights into job growth, rental rates, and market dynamics in San Diego. Through a comprehensive analytical approach, the study addresses the effects of development policies by presenting ten distinct scenarios that project delivery volumes, uncovering potential reductions ranging from 10% to 90% compared to a baseline scenario without restrictions. The analysis anticipates vacancy rates and job losses across various years, utilizing the LINEST function for forecasting key market indicators, including asking rents and asset valuations. Additionally, the research highlights the critical importance of logistics categories and decarbonization strategies to meet net-zero goals, as well as contemporary warehouse design trends and transportation innovations. The conclusions drawn from this research emphasize the complexities of balancing community interests with economic growth and sustainability in the region, as well as the broader economic implications of restrictive development policies on San Diego's warehouse industry, which could adversely affect the economic vitality of the warehouse sector.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158829</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Markers</title>
<link>https://hdl.handle.net/1721.1/158828</link>
<description>Dynamic Markers
Ortiz, Evan
When I was a child, I was certain that all clouds came from New Jersey. After passing through the Lincoln Tunnel, I-95 would gradually ascend, lifting our car to eye level with the billowing clouds emerging from beneath us. These clouds rose from the Meadowlands, a great marsh just two miles west of Manhattan, a landscape that has become defined by the infrastructure that occupies it. Nearly equal in land mass and opportunity to Manhattan, this landscape managed to resist holistic transformation due to our inability to control its water. Rather than becoming a prosperous site for agriculture in the 19th century, or the next metropolis in the early 20th, the Meadowlands fell out of focus and became a site to absorb the infrastructural networks needed to uphold rapid development at its edges.&#13;
&#13;
The Meadowlands was sutured shut by the networks interlaced through it in an attempt to erase the failures of the past. Utilizing this landscape as an urban sponge overlooked the fact that the marsh hosted a series of ecological infrastructures of its own. The Meadowlands' soft, uncertain ground once managed variations in the water level, but the draining of the ground that came with development reduced its capacity, making pump stations essential for managing water in inhabited areas. Unlike the other forms of infrastructure in the Meadowlands, the presence of the pump station is subdued; its invisibility upholds the illusion that the developments within this landscape are not threatened by their surroundings. However, steady sea level rise and an increase in storm surges have caused these pumps to fail, pulling back the veil on their existence and, more importantly, on the essential role they play in our continued occupation of this landscape. The urgent need to increase the capacity of these pump stations provides an opportunity to reconsider their agenda.&#13;
&#13;
This thesis proposes the Dynamic Marker, a new type of infrastructure that redefines the relationship between human systems and ecological flows. Grafted onto existing pump stations in the Meadowlands, it releases water as mist from 800 feet in the air, transforming the hidden mechanics of water management into a moment of wonder. The Dynamic Marker fosters microclimates and ecological connections, transforming infrastructure into a dynamic process that evolves with its surroundings. Over time, it becomes both a memorial to the marsh and a provocation for the future, inviting a rethinking of infrastructure as a participatory and adaptive force that responds to its surrounding ecology.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158828</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>More than the sum of parts: deconstructing tissues in their spatial, temporal, and environmental contexts</title>
<link>https://hdl.handle.net/1721.1/158827</link>
<description>More than the sum of parts: deconstructing tissues in their spatial, temporal, and environmental contexts
Tzouanas, Constantine
The human body is composed of ~37,000,000,000,000 cells, exquisitely organized into tissues delivering emergent functions beyond individual cells’ capabilities (e.g., the brain’s seemingly-effortless computations, the liver’s wide-ranging chemical processing). In my PhD, I studied how healthy tissues arise from properties and interactions of constituent cells, and how disease outcomes stem from dysregulation of underlying cellular parts. 1) To study how cells’ spatial organization shapes tissue function, I created photochemistry tools to discover gradients in how immune cells combat cancer across a tumor’s core vs. periphery. 2) To then explore spatially-structured tissues, I turned to tuberculosis (TB) granulomas: just centimeters apart, the immune system can kill bacteria in one granuloma or permit years-long bacterial survival in another. Reconciling this paradox, I discovered that bacterial killing needs coordinated signaling across immune cells, but TB-permissive granulomas structurally remodel to inhibit TB spread at the expense of “walling out” immune cells. 3) Connecting disease to lifestyle exposures, I determined tobacco smoking increases TB risk via blood-to-lung migration of TB-permissive cells. 4) Intrigued by past stresses seeding future dysfunction, I studied similar themes in adaptations to high-fat diets, discovering tradeoffs where individual liver cells promote their own survival at the expense of reduced tissue function and increased cancer risk. Through these studies, I dissected tissues and diseases with unprecedented resolution via single-cell multi-omics and mechanistic perturbations, defining the parts, interactions, and causal regulators that underlie tissue (dys)function.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158827</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Manufacture of a Modular Continuous Unit Dose Pharmaceutical Lyophilizer</title>
<link>https://hdl.handle.net/1721.1/158826</link>
<description>Design and Manufacture of a Modular Continuous Unit Dose Pharmaceutical Lyophilizer
Burcat, Steven
Pharmaceutical lyophilization (freeze-drying) enables long term storage and simplified transportation for aqueous vaccines and protein formulations. Modern industrial pharmaceutical freeze-driers rely on large batch and open loop formulation processing, limiting supply chains and resulting in variable quality products. This work describes the design and manufacture of a modular continuous lyophilization machine for pharmaceutical production. Additionally, the scaling and design methodology outlined in this work enables the development of both smaller systems for laboratory testing and larger machines to fit the needs and requirements of individual facilities. This machine introduces three new technologies to the pharmaceutical freeze-drying process. The first innovation is a continuous flow lyophilization topology which separates the lyophilization steps spatially rather than temporally. This layout allows product to travel through the system in smaller batches for increased product uniformity and quality control. The second innovation is a weight-based sensor for monitoring residual water content. This sensor enables in situ monitoring of product during sublimation, and it resolves mass measurements as small as 5 mg. The third innovation is the implementation of a thermal shock method of inducing controlled nucleation. The convective cooling and spatial non-uniformity within the machine allow vials to experience a 40°C temperature drop in less than 30 seconds. This nucleation front starts on the vial walls, rather than at the top surface of the solution in the vial, potentially increasing the water sublimation rate during drying compared to current nucleation methods. The machine designed and built for this work integrates into modern factory processes and can be scaled from the lab bench to a production line. The manufactured prototype demonstrates improvements on the production rate, flexibility, and quality of existing machines.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158826</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and perception of sounds from physical interactions reveals auditory intuitive physics</title>
<link>https://hdl.handle.net/1721.1/158825</link>
<description>Synthesis and perception of sounds from physical interactions reveals auditory intuitive physics
Agarwal, Vinayak
Object interactions – collisions, scraping and rolling – create many of the sounds that we hear in the world around us. These sounds are generated via lawful physical dynamics. Anecdotally, humans possess some intuitive knowledge of the physical generative processes underlying sound production, but little is known about the extent and nature of this knowledge. This thesis characterizes the auditory perception of physical object interactions, making three main contributions. First, we develop realistic contact sound synthesis tools, in part via large-scale measurements of object acoustics. Second, we show that humans solve the ill-posed problem of inferring object mass and damping by using internalized knowledge of the distribution of object resonances. Third, we provide evidence for “auditory intuitive physics” in which human listeners derive physical information through sound, maintain it over time in object representations, and compare it across sensory modalities.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158825</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Green Herrings in a Yellow Room: A Counter Production of The Yellow Wallpaper</title>
<link>https://hdl.handle.net/1721.1/158824</link>
<description>Green Herrings in a Yellow Room: A Counter Production of The Yellow Wallpaper
Aulgur, Leanah Sloan
Charlotte Perkins Gilman’s The Yellow Wallpaper is a designer’s work of critical fabulation. Published in 1892, the short story follows an unnamed woman prescribed a “rest cure” by her husband, John. Confined to a room wrapped in gothic yellow wallpaper, the narrator becomes obsessed with its patterns. As her mind deteriorates, she sees a woman trapped behind the paper. This production reimagines Charlotte’s bedroom as not yellow, but green—a rich, vibrant green laced with the medium responsible for its provocative coloration: arsenic. The toxic pigment, invented in the late 18th century, induces bodily ailments, mental instability, and even death when used in textiles. Interiors threatened tenants with toxins as this green spread through 19th-century Europe before reaching New England and our narrator. Though known as an author and suffragette, Charlotte was first a designer. As a student in the inaugural class of the Rhode Island School of Design, she studied the arts just miles from the ports where the green pigment began its early residence. Her writing draws from arsenic publications, her scenes mimic medical case studies, and archives suggest she was aware of these toxic walls. This theatrical table reading positions the authoring of The Yellow Wallpaper within the simultaneous stories of the arsenic wallpaper. Why does the author mimic material traces of the green while redirecting her readers to the yellow? When does the color transition from literal to abstract? This work recontextualizes the foundational feminist text by unfabulating the story through design—questioning Charlotte’s literary misdirections and the public discourse surrounding the toxic color.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158824</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Marketplace Multiculturalism</title>
<link>https://hdl.handle.net/1721.1/158823</link>
<description>Marketplace Multiculturalism
Chowdhary, Harris
Picture Texas. No longer simply cowboys, footballs, and firearms, this land today is sustained by a daily choreography of cross-border commerce, managed by entertainment media turned handheld surveillance, and peppered with enclaves of immigrants from the world over. A contact zone where logistical and legislative apparati warp to serve consumer comfort, Texas today is the world tomorrow: forget the Alamo, it’s highways, tax-incentives, and backyard barbecue on the 21st century frontier. This thesis responds to a call for roadside service stations along a planned international tourist corridor in the Texas-Mexico borderlands with six interventions: a panoramic viewing tower disguised as a billboard, a sunken stadium for athletic agonism, a photovoltaic drive-in charging cinema, an international culinary incubator, a showroom for automated fulfilment, and a customs and border patrol welcome center. These structures are testing grounds for modes of relation and value exchange that edge beyond the outdated positivisms of globalization. They ask how architecture might produce new possibilities and publics by working within and taking advantage of contemporary systems of control. As tourist destinations, the stops suggest the nation’s true mythos lies not in static symbols but in choreographies of transaction and contact. Articulating in built form the dynamic processes that define a territory of sprawl, this proposal suggests that Texas’s most authentic monuments are the stops we make along the way.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158823</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning Beyond Crisis: The Promise of Insurgent Planning in Post-Disaster Mocoa</title>
<link>https://hdl.handle.net/1721.1/158822</link>
<description>Planning Beyond Crisis: The Promise of Insurgent Planning in Post-Disaster Mocoa
Osorio Botero, Juan Camilo
During the evening of March 31, 2017, a catastrophic landslide engulfed the Colombian city of Mocoa, killing at least 335 people in roughly thirty minutes. Seventy others disappeared, over one hundred people were reported injured across 48 neighborhoods, and roughly 1,500 housing units were destroyed. With a total of 22,000 people impacted, this catastrophe was the deadliest disaster affecting Colombia in recent decades. Yet, despite an alignment of major national political commitments, international cooperation, and a multi-million-dollar humanitarian budget, reconstruction plans have not been completed seven years later. Why? As the first comprehensive analysis of the landslide and its aftermath, this dissertation is a novel investigation into the competing forces that ultimately canceled the central reconstruction plan, demonstrating that the kind of disruption caused by the disaster mobilized new actors and new forms of agency. In contrast to the popular perception that such a lack of remediation suggests the failure of urban governance, the dissertation speaks to the success of activists who have neutralized the government’s reconstruction plan, which activists perceived as worsening the circumstances leading up to both the catastrophe and recovery. Distinguishing between the “landslide” and the “larger disaster,” the dissertation further explains the government’s proposed reconstruction plan within a history of violent extraction, dispossession and displacement. Framing an original case consisting of fifteen planning vignettes to trace actions, reactions and counteractions, I expose the reduction of the planning process as crisis urbanism.
My research contributes to our understanding of variability among insurgent planning actors and their invented spaces for engagement in the context of disaster, by defining technocratic resistance as a valid form of dissent inside the government, and by proposing a new device for the study of insurgent planning called transformative spaces, which enable local communities’ right to plan. Drawing on contemporary debates on anti-crisis, risk and decolonial thought, the dissertation imagines an alternative paradigm for planning beyond crisis that enables radical community action through dissenting grassroots leadership.&#13;
&#13;
Keywords: crisis urbanism, technocratic resistance, insurgent planning, regenerative planning, anti-crisis, risk, decolonial thought
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158822</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Identity-Oriented Systems Engineering Framework for Complex Sociotechnical Systems: A case study of Zero Robotics</title>
<link>https://hdl.handle.net/1721.1/158821</link>
<description>An Identity-Oriented Systems Engineering Framework for Complex Sociotechnical Systems: A case study of Zero Robotics
Zhang, Yiyun
Historical and ongoing discrimination against identity groups defined by race, gender, social class, and other differences leads to persistent inequalities across many domains of society, including socioeconomic status, health systems, political power, and educational opportunity. Technology, however, often entrenches or sustains these hierarchies and further strengthens these social inequalities. While there are many frameworks for studying complex systems, a framework focused on advancing social justice and integrating technological and social considerations is missing. This work introduces the Intersectional Antiracist Technology Framework as a new tool and applies it to an existing complex system, Zero Robotics, in STEM education. STEM education, of increasing importance in a competitive modern world, is one of the most popular methods of cultivating students’ interests and capabilities in solving complex problems. However, disparities in access to quality STEM learning opportunities and inclusion in STEM activities remain significant challenges to promoting social equality. This work builds upon systems engineering tools and uses the Intersectional Antiracist Technology Framework to describe, explain, and evaluate the existing complex system of Zero Robotics. Zero Robotics is an education outreach program designed as an early intervention to enroll students in aerospace and related fields. The program aims to serve students across the pipeline and provide them with learning opportunities through interactions with a space robot. It is a clear example of a complex sociotechnical system with both technological and social factors. Through the case study of Zero Robotics, data are collected through interviews, surveys, participant observation, and available documents. Qualitative program outcomes are assessed from student surveys before and after the Zero Robotics competition.
This work is the first attempt to apply the Intersectional Antiracist Technology Framework to an existing complex system that is being managed by the author. The findings from this study demonstrate insights that can be gained about complex sociotechnical systems by viewing them from multiple stakeholder perspectives and blending information about the technical and social design aspects.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158821</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improved Complexity Analysis for the Proximal Bundle Algorithm Under a Novel Perspective</title>
<link>https://hdl.handle.net/1721.1/158820</link>
<description>Improved Complexity Analysis for the Proximal Bundle Algorithm Under a Novel Perspective
Fersztand, David
The proximal bundle algorithm (PBA) is a fundamental and computationally effective algorithm for solving optimization problems with non-smooth components. We investigate its convergence rate in two settings. We first focus on a composite setting where one function is smooth and the other is piecewise linear. We interpret a sequence of null steps of the PBA as a Frank-Wolfe algorithm on the Moreau envelope of the dual problem. In light of this correspondence, we first extend the linear convergence of Kelley's method on convex piecewise linear functions from the positive homogeneous to the general case. Building on this result, we propose a novel complexity analysis of the PBA and derive an O(epsilon^-4/5) iteration complexity, improving upon the best known O(epsilon^-2) guarantee. This approach also unveils new insights on bundle management. We then present the first variant of the PBA for smooth objectives, achieving an accelerated convergence rate of O(epsilon^-1/2 log(epsilon^-1)), where epsilon is the desired accuracy. Our approach addresses an open question regarding the convergence guarantee of the PBA, which was previously posed in two recent papers. We interpret the PBA as a proximal point algorithm and base our proposed algorithm on an accelerated inexact proximal point scheme. Our variant introduces a novel null step test and oracle while maintaining the core structure of the original algorithm. The newly proposed oracle substitutes the traditional cutting planes with a smooth lower approximation of the true function. We show that this smooth interpolating lower model can be computed as a convex quadratic program. We finally show that Nesterov acceleration can be effectively applied when the objective is the sum of a smooth function and a piecewise linear one.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158820</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>thesis in the field of Chemical Oceanography: Marine iodine biogeochemistry: inorganic speciation, redox dynamics and organic complexation</title>
<link>https://hdl.handle.net/1721.1/158819</link>
<description>thesis in the field of Chemical Oceanography: Marine iodine biogeochemistry: inorganic speciation, redox dynamics and organic complexation
Ștreangă, Iulia-Mădălina
Iodine holds significant importance across various disciplines, including medicine, industrial processes, organic synthesis, paleoclimatology, atmospheric chemistry and modern climate science. The ocean, as a major surficial iodine reservoir and the primary source of this element to the atmosphere, plays a central role in global iodine cycling. Despite significant progress, key aspects of iodine cycling in the marine environment remain poorly understood. This thesis leverages recent advances in high-precision techniques, including liquid chromatography and mass spectrometry, to enhance our understanding of marine iodine biogeochemistry. Detailed analyses of the major inorganic iodine species in seawater, iodide and iodate, were conducted in the oligotrophic waters of the North Pacific and the oxygen minimum zones of the Eastern Tropical Pacific. The observed distributions reflect the impact of both in situ and ex situ processes on dissolved iodine concentrations, offering valuable insights into the prevalence and extent of anoxic conditions within oxygen minimum zones. Iodate formation rates were investigated through surface seawater incubations using iodide-129, a long-lived radioisotope, as a tracer. The experimental results underscore the pivotal role of particles in mediating redox transformations between iodide and iodate, while also emphasizing the significance of iodine species with intermediate oxidation states in these processes. Building on this observation, a significant focus of this thesis is the characterization of dissolved organic iodine in the ocean. Two innovative methodologies for identifying dissolved organic iodine compounds are presented. The first approach focuses on labelling cultures of the cyanobacterium Synechococcus with iodide-129 to generate a diagnostic isotopic pattern in resultant dissolved organic iodine complexes. 
The second approach employs sequential purification and isolation of a target compound from a large-volume seawater sample collected in the North Pacific. Collectively, the findings presented in this thesis significantly enhance our understanding of iodine cycling in the marine environment, offering novel insights into the distribution and composition of both inorganic and organic iodine, as well as the rates and dependencies governing iodine cycling processes. Furthermore, the methodologies introduced here pave the way for future research to elucidate the mechanisms driving iodine redox transformations in seawater, refine the marine distribution of inorganic iodine, and advance the molecular characterization of dissolved organic iodine.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158819</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Right and Left Ventricular Coupling for Optimization of Mechanical Circulatory Support</title>
<link>https://hdl.handle.net/1721.1/158818</link>
<description>Leveraging Right and Left Ventricular Coupling for Optimization of Mechanical Circulatory Support
Lamberti, Kimberly Kate
Mechanical circulatory support devices have the potential for profound impact on cardiogenic shock patients. They enable volume propulsion and pressure gradient generation first by unloading and later by decoupling native cardiovascular interactions, which reduces cardiac load and energy consumption while increasing organ perfusion in the face of disease. However, there is a potential price: native coupling evolved to optimize blood flow dynamics and the complex interplay between individual cardiovascular components and interposing organs like the lung. Disrupting native coupling with mechanical support risks decompensation if the heart and lung cannot tolerate these changes.&#13;
&#13;
One particularly concerning consequence of altered coupling is that upwards of 40% of patients with left-sided mechanical support face ensuing right heart failure, which requires urgent action and often is associated with even higher mortality rates. We hypothesized that better understanding of right heart function and the mechanisms of right heart (in)tolerance to left-sided support will improve device utility by aiding device selection as well as titration throughout a patient’s clinical course. In particular, we focused on right and left ventricular coupling, which consists of serial coupling across the closed-loop cardiovascular circuit, and parallel coupling that enables intracardiac interdependence and force transmission between the ventricles. Each interaction plays a critical role in a patient’s tolerance to mechanical support and optimal setpoint.&#13;
&#13;
We used a series of controlled porcine experiments to evaluate right and left heart coupling during mechanical support. In each set of experiments, we induced graded models of disease ranging from health to progressive impairment, enabling evaluation of mechanical support across a spectrum of right and left heart states. Through these studies, we improved mechanistic understanding of the differences between right and left heart function, and how those differences dictate the response to left-sided support. Specifically, we found that pulmonary vascular compliance enabled a unique right heart adaptability to varied flow, but limitations in compliance due to disease yielded right heart intolerance to support. We leveraged the indwelling pump to dynamically alter load in the system, creating a method to rapidly evaluate pulmonary vascular compliance adaptability and therefore predict the need for right-sided support. Finally, we created a metric using device-organ interactions for tracking right-left coupling over time, which can aid optimization of device speed based on relative right and left ventricular volume setpoints. Translation of these findings to the clinic could better inform use of mechanical circulatory support technologies with the goal of improving outcomes for cardiogenic shock patients.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158818</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of Introducing Technical Design Elements in Makerspace&#13;
Trainings</title>
<link>https://hdl.handle.net/1721.1/158817</link>
<description>Impact of Introducing Technical Design Elements in Makerspace&#13;
Trainings
Barakat, Layal A.
Makerspaces are used as a tool in higher education to support curricular, hands-on projects and encourage student extracurricular and personal projects. Because access to making is more self-driven, there is a gap between what makerspace trainings teach students and what students are expected to know by the time they reach capstone courses in engineering. To test the effects of introducing a technical makerspace training to students, several steps were taken. First, known barriers to making were explored and organized into categories. Second, Design Expertise was defined as a means to combat these barriers: it is a combination of (1) knowledge, (2) skill, (3) perspective, and (4) motivation. Third, a rigorous framework, the Design-Fabrication-Performance (DFP) matrix, was created to break down design expertise into manageable chunks. Next, existing makerspace trainings at MIT were characterized using the DFP matrix. Afterwards, the DFP matrix was used to design a new, experimental training which would incorporate engineering design thinking and expertise with the typical makerspace machine training structure. Finally, 23 student participants were recruited, surveyed using a Likert scale (1 = strongly disagree, 5 = strongly agree), and interviewed to understand the impact of the training on participant perspectives, engineering identity, and maker motivation. Initial results suggest that student self-efficacy increases as a result of the training. This outcome is shown by the highest average differential of all survey responses (M = 0.78, SD = 0.85) for question 15: “I am confident in my ability to use GIR level knowledge to design and make things that perform as intended”. The maker training reinforced the motivation to make things for a majority of students, with the average score for the associated question being 4.48 (SD = 0.85). The training also positively impacted some traditionally marginalized groups in STEM.
For the statement "I feel comfortable in engineering at MIT", women averaged 3.27 and men 3.90 before the training. The average differentials in the post- and pre-training scores to this question for these groups were 0.4 and 0.91 respectively. The training also appears to level the playing field for students with less advanced backgrounds in engineering and science. For the question “I am confident in my ability to solve GIR level problems on my own”, students with parents with graduate degrees or higher averaged 4.44 before the training, while those with parents with undergraduate degrees or lower averaged 3.57. The average differentials are 0.22 and 0.64 respectively. Although students saw the value in modeling systems before design and fabrication, several questions demonstrated that students found modeling to be tedious and preferred to test and iterate on their designs in the makerspace; further work is needed to eliminate barriers to sustain student interest and participation in the long term. A longitudinal study following these students would also be needed to reveal long term outcomes such as STEM retention and long-term makerspace usage.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158817</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of Long-timescale Behavior of Positive DC&#13;
Streamer Coronas</title>
<link>https://hdl.handle.net/1721.1/158816</link>
<description>Investigation of Long-timescale Behavior of Positive DC&#13;
Streamer Coronas
Strobel, Lee R.
Positive DC streamers are filamentary low-temperature discharges that are relevant to many applications, including sterilization, ionic wind generation, agriculture and atmospheric electricity. Even when excited by a DC voltage, streamers in atmospheric-pressure air typically self-pulsate with a frequency of several kilohertz. The generally accepted explanation for DC streamer self-pulsation is that it is driven by recovery of the electric field near the tipped anode, due to electrostatic removal of ionic space charge from the inter-electrode gap over inter-pulse timescales. However, this theory has not been validated, either experimentally or numerically. Most prior works investigating DC streamers have focused on the streamer propagation phase (a few tens of nanoseconds) - few have investigated longer timescales, including the bridging of the electrode gap by the streamer and the subsequent current pulse (hundreds of nanoseconds) and the period in-between streamer pulses, leading up to initiation of the next streamer discharge (hundreds of microseconds). The work presented in this thesis focuses on investigation of the longer timescales of positive DC streamer development in a tip-to-plane geometry, in particular beyond the streamer propagation phase, through the current flow and inter-pulse phases. This begins with an experimental study to measure the long-timescale development of the electric field inside a streamer corona using the E-FISH laser diagnostic technique. This shows some surprising results, which do not seem to be consistent with the theory of DC streamer self-pulsation being driven by electric field recovery at the anode. The near-anode electric field is not observed to recover during the inter-pulse period - instead, the near-anode behavior seems to be dominated by a persistent glow discharge and a curious wave-like feature is observed in the electric field, traveling towards the anode on ionic timescales.
This is followed by the development of a 1.5D reduced-order numerical model of a DC streamer, which is optimized for solving over long timescales via a ‘triple-stack’ of transient solvers. The model is able to fully resolve the boundary sheath layers of the plasma and is able to capture detailed behavior of the cathode sheath development during bridging via the use of a kinetic flux boundary condition for the charged species. This model is first applied to modeling the bridging and current flow phases of streamer development, and its prediction shows a good qualitative match to the behavior of the experimental current pulse. Parameter sweeps show that the streamer current pulse is sensitive to the assumed radial behavior and the rate of electron-ion recombination, but insensitive to the applied boundary conditions or secondary emission. The final section describes an extension of the 1.5D streamer model to simulate the streamer inter-pulse phase and initiation of a second streamer. It is shown that initiation of a second streamer can be predicted by a fluid model and that radial expansion of positive ions plays an important role; however, it has proven difficult to integrate that effect into the 1.5D model. The model results are consistent with streamer self-pulsation being due to electric field recovery; however, comparison with the results of the E-FISH experiment suggests there may be different mechanisms driving positive DC streamer self-pulsation, depending on the presence or absence of a glow discharge on the anode.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158816</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fabrication and Characterization of Horizontally Aligned Carbon Nanotube Thermoplastic Bulk Nanocomposite Laminates</title>
<link>https://hdl.handle.net/1721.1/158815</link>
<description>Fabrication and Characterization of Horizontally Aligned Carbon Nanotube Thermoplastic Bulk Nanocomposite Laminates
Lin, Yuying
Carbon nanotubes (CNTs) have advantageous mass-specific mechanical properties and excellent thermal and electrical conductivity, making them an attractive reinforcement for composite systems. Due to an increasing need for more sustainable materials, incorporation of CNTs into thermoplastic matrices presents a promising solution for recyclable and repairable polymer nanocomposites (PNCs). This thesis presents an approach to fabricating and characterizing thermoplastic PNCs that incorporate ultra-high volume fractions of horizontally-aligned carbon nanotubes (HA-CNTs). An MIT-developed bulk nanocomposite laminating (BNL) process was adapted to fabricate multi-ply, unidirectional composites with poly(methyl methacrylate) (PMMA) and acrylonitrile butadiene styrene (ABS) matrices. For the HA-CNT/PMMA system, the BNL process was tailored to fabricate 4-ply and 8-ply laminates with fiber volume fraction v_f &gt; 45 vol.%, using a 9 wt.% PMMA in anisole solution. Through characterization via X-ray microcomputed tomography (µCT), scanning electron microscopy (SEM), thermogravimetric analysis (TGA), Fourier transform infrared (FTIR) spectroscopy, and polarized Raman spectroscopy, HA-CNT/PMMA laminates were shown to be free of micro-scale voids with weak or non-existent process-structure interactions, i.e., the CNTs had negligible effect on the polymer structure. TGA and FTIR helped demonstrate that the BNL process did not lead to decomposition or chemical changes to neat PMMA, and FTIR also revealed that the fabrication process did not induce covalent bonding between CNTs and PMMA. The crystalline behavior of PMMA was studied via differential scanning calorimetry (DSC) as well as X-ray diffraction (XRD), which demonstrated that BNL processing temporarily lowers the glass transition temperature T_g of neat PMMA by 4 °C, with no permanent change after removal of thermal history. 
However, CNT inclusion raises the laminate T_g by 11 °C, as shown through both DSC and dynamic mechanical analysis (DMA), which can be explained by CNT constraints on polymer chain movement as opposed to any crystallinity changes in the PMMA. The storage modulus of 8-ply HA-CNT/PMMA laminates was shown via DMA to be more than 600% that of neat PMMA, while a decrease in tan(δ) of the laminate compared to neat PMMA indicates an increase in elastic behavior due to CNT inclusion. 4-ply laminates were subjected to a minimum radius of curvature test, showing a ∼50% increase in yield strain compared to neat PMMA. Electrical properties of 4-ply HA-CNT/PMMA laminates were measured via 4-point probe testing, which demonstrated good Ohmic contact between CNTs, with conductivity of ∼2 × 10⁴ S m⁻¹ and an anisotropy ratio of 1.2. A preliminary investigation was completed to evaluate the feasibility of using the BNL process for the HA-CNT/ABS system. Uniform suspensions of ABS in anisole were developed to use the BNL polymer infiltration method of spin-coating and vacuum-assisted infusion. It was shown that the nature of the ABS suspension led to uneven polymer distribution over the HA-CNTs. This work has demonstrated the successful incorporation of high volume fractions of aligned CNTs into PMMA thermoplastic matrices as well as the electrical conductivity of such composites, opening an avenue to the development of other high v_f thermoplastic PNCs and exploration into additional multifunctional capabilities.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158815</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Modeling of Biological Function</title>
<link>https://hdl.handle.net/1721.1/158814</link>
<description>Computational Modeling of Biological Function
Khodaee, Farhan
How biological function emerges from complex molecular patterns is a fundamental question in biology. Addressing this question requires a deep exploration of the concepts of genotype and phenotype, which serve as the foundation of this inquiry. This dissertation focuses on providing a quantitative approach through the lens of computation to dissect the dynamic relationship between genotype and phenotype. In particular, recent advancements in high-content genotyping methods, such as genome-wide association studies (GWAS) and single-cell RNA sequencing, have provided powerful tools for mapping the molecular basis of biological function, but have also introduced challenges due to the high dimensionality, vast combinatorial possibilities, and multimodal characteristics of the data. The overarching goal of this dissertation is first to provide a critical discussion on the theories of genotype and phenotype as they relate to biological function and to propose new methods to map their relationship. Specifically, we present an integrated genetics framework designed to analyze and interpret the manifold of genotypes and their associated phenotypes simultaneously. We applied this approach to develop a multimodal foundation model for human transcriptomics at the cellular level. To further test the capabilities of this method, we apply it to dissect the aging process. The results of this study provide novel concepts and methods for analyzing genetic data along with phenotypic information at higher resolution. Moreover, the results uncover potential cross-tissue biomarkers that are undetectable through conventional gene expression analysis alone. Overall, this study aims to advance our understanding of the dynamic interplay between gene patterns and phenotypic manifestation and demonstrates the potential of computational modeling in uncovering new dimensions of cellular function and complexity.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158814</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tracking carbon fluxes across ocean interfaces using dissolved gas observations</title>
<link>https://hdl.handle.net/1721.1/158813</link>
<description>Tracking carbon fluxes across ocean interfaces using dissolved gas observations
Traylor, Shawnee Nicole
The cycling and exchange of carbon between Earth’s systems play a pivotal role in regulating climate, yet two major carbon fluxes remain poorly constrained: the biological carbon pump (BCP) and carbon release from Arctic permafrost. This thesis focuses on dissolved gases as tracers and drivers of these processes through both autonomous and field-based observations. It encompasses (i) improvements to sensor-based measurements of O₂, (ii) the use of these measurements to assess the strength of the BCP in two distinct export regimes, and (iii) isotopic approaches to carbon dioxide (CO₂) and methane (CH₄) dynamics at a coastal permafrost site. The first part of the thesis is centered around the NASA EXPORTS campaign and studies the BCP at two contrasting field sites. Using autonomous platforms, we evaluated carbon export at both sites and demonstrated that at the lower productivity site, a greater proportion of fixed carbon was routed to sinking particulate organic carbon (POC), while the higher productivity site resulted in near-equal proportions of dissolved organic carbon production and sinking POC. These findings underscore the value of autonomous sensors in capturing spatial and temporal variability in oceanic carbon cycling. The second part of this thesis shifts focus to the Arctic, where rapid warming threatens to mobilize vast (~1,500 Pg) amounts of carbon currently stored in permafrost. This study presents observations from the spring thaw at a coastal Arctic site and demonstrates that even sites with high CH₄ and CO₂ concentrations drew less than 10% of their carbon from ancient permafrost sources. The variability in CH₄ and CO₂ emissions reflects the complex interplay between hydrological changes, primary productivity, and microbial processes. The research highlights the need for regular monitoring of Arctic rivers, which integrate changes in the terrestrial system, as a potential early warning system for abrupt permafrost thaw. 
This thesis leverages the fundamentals of dissolved gas geochemistry to examine key climate-relevant biogeochemical cycles across diverse environments that are sensitive to global change. These insights contribute to refining Earth system models and emphasize the need for expanded monitoring to predict future shifts in global carbon cycling and climate dynamics.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158813</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Lagrangian perspective of mesoscale biophysical interactions in the subtropical ocean</title>
<link>https://hdl.handle.net/1721.1/158812</link>
<description>A Lagrangian perspective of mesoscale biophysical interactions in the subtropical ocean
Jones-Kellett, Alexandra E.
Most of the ocean’s kinetic energy is at the mesoscale, which includes highly dynamic physical perturbations that persist for months, a biologically relevant timescale for phytoplankton growth and bloom development. Importantly, mesoscale currents and the associated biological responses (i.e., biophysical interactions) are not spatiotemporally static, so they are difficult to characterize. In this thesis, we interpret phytoplankton observations in an objective Lagrangian manner, i.e., with a frame of reference that follows the motion of water parcels experienced by drifting organisms. We build a Lagrangian coherent eddy tracking algorithm that identifies the boundaries of water masses trapped for a month or longer. Using this tool, we assess the variability of the lateral advective properties of eddies across the North Pacific Subtropical Gyre, finding that only half of the remotely sensed eddies identified from the traditional, Eulerian sea level anomaly method trap waters on these timescales. We then statistically compare satellite-observed chlorophyll-a anomalies associated with eddies that trap versus mix across their boundaries. Lagrangian coherent vortices have more anomalous biological signatures in the gyre, so we argue that the role of leaky eddies in altering biogeochemistry may be underestimated due to lateral dilution. We also highlight substantial regional and seasonal variability in the dominant biophysical interactions within the oligotrophic regime, helping to explain inconsistencies of in situ eddy observations across this region. Lastly, we show how the Lagrangian water mass histories of in situ samples shape the phytoplankton community in the open ocean, quantified with amplicon sequencing and internal genomic standards. In non-eddy waters, we found that cyanobacteria are advantaged over eukaryotic phytoplankton when lateral mixing is minimized for several months. 
In or near mesoscale eddies, where vertical perturbations are a source of new nutrients, eukaryotic phytoplankton gene abundance has no dependence on the lateral mixing histories. The results suggest dispersal and niche generation drive phytoplankton variability but in different ways in and outside eddies. This thesis emphasizes how Lagrangian tools reveal mesoscale structures (otherwise invisible with Eulerian reference frames) that trap, transport, and transform ecosystems, generating phytoplankton patchiness and variability in the surface ocean.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158812</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remembering Energic Connectivities: Appropriate Technology and Domestic Infrastructure in the Energy Crisis</title>
<link>https://hdl.handle.net/1721.1/158811</link>
<description>Remembering Energic Connectivities: Appropriate Technology and Domestic Infrastructure in the Energy Crisis
Adornetto, Turner Day
The electric grid is a large, complex machine. And yet, it represents but one, narrow framework for energic relations. Visions for just and sustainable futures – for social and ecological repair – should wander further afield. One place they could go is home. In this essay, the Appropriate Technology Small Grants Program, an oft-forgotten chapter of U.S. energy history, shows us how small-scale, place-based inventors transformed homes and neighborhoods into converters and conductors of nearby flows and potentials. At the height of the energy crisis of the 1970s, these inventors pursued a distributed solution to shortage. Along the way, they re-wired the material and conceptual strictures of the modern dwelling and broke into a vast reserve of low-cost, renewable power. Home, they showed, was a workshop to understand and design energic connectivities. But tracing the effects of home-based appropriate technology leads us somewhere else – to the frontiers of energy extraction, where social justice activists proved that small-scale, place-based energy systems could replace unjust mines and dams. What emerged, then, through renewed attention to the possibilities for home and energy, was a powerful counter to the logics of sacrifice at both ends of the energy continuum. Today, as we chart our own response to crisis, it helps to remember how others tried to create solidarities and resist tradeoffs with small-scale, place-based infrastructures. We can, I think, do more with energy.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158811</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mining Multifaceted Customer Opinions from Online Reviews</title>
<link>https://hdl.handle.net/1721.1/158810</link>
<description>Mining Multifaceted Customer Opinions from Online Reviews
Mao, Chengfeng
Online reviews are a valuable source for studying customer needs and preferences. Previous studies focus on extracting a set of a priori defined constructs, such as product attribute perception or explicit customer needs, from reviews. Such an a priori focus circumvents the limitations of certain natural language processing algorithms but discards valuable information in reviews that is not in the scope of the predefined constructs. This study proposes a new method of extracting customer opinions and opinion targets from reviews with the Aspect Sentiment Triplet Extraction (ASTE) algorithm and then identifying theoretical constructs critical for product development with an a posteriori interpretation method. We demonstrate the value of our proposed method by identifying granular opinion targets and expressions to find infrequent but important phenomena such as user innovations and delights.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158810</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geographies of Selective Surveillance: Analyzing the Lived Experiences of Street-Level Trans Sex Workers and Muslims in India through the Matrix of Domination</title>
<link>https://hdl.handle.net/1721.1/158809</link>
<description>Geographies of Selective Surveillance: Analyzing the Lived Experiences of Street-Level Trans Sex Workers and Muslims in India through the Matrix of Domination
Radhakrishnan, Radhika
In this paper, I present a study of public and private CCTV surveillance of urban public spaces in India, which I term ‘geographies of selective surveillance’ — areas where state power is discretionarily exercised and abused, and the presence of the state is experienced principally through police pickets and everyday violence unleashed on marginal occupants, rather than through access to civic amenities and systems of justice. I analyze these experiences of surveillance from the standpoint (Harding, 1992) of minoritized communities of street-level trans sex workers in Kolkata and Muslims in Mumbai. I then situate these experiences within the Matrix of Domination (Collins, 1990), a theoretical framework that explains how systems of power are configured. Defining empowerment as the power to gain control of and/or benefit from a scenario by weakening the Matrix of Domination, I analyze the structural determinants that make surveillance empowering or disempowering for these communities. I find that, on the one hand, surveillance can be an empowering tool for minoritized communities as evidence of harm and innocence in cases of false accusations or when police officials typically refuse to believe their experiences due to discriminatory attitudes. On the other hand, surveillance also offers new opportunities for the private exploitation of the instruments of state power through corruption, as well as for community-based moral policing to be done with greater success and efficiency. I argue that what ultimately determines how surveillance is experienced is not laws and policies, but rather how power is discretionarily exercised on the ground, refracted through the influence of cultural and political beliefs, and discourse.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158809</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Commodifying and Consuming Endocrine Drugs in Republican China (1920s–1940s)</title>
<link>https://hdl.handle.net/1721.1/158808</link>
<description>Commodifying and Consuming Endocrine Drugs in Republican China (1920s–1940s)
Wang, Thelma Yuanzhi
After the introduction of hormone pharmaceuticals into China during the early twentieth century, these substances became objects of fascination for a growing urban elite class. Drawing from newspapers, medical journals, and advertisements, this article examines the unique trajectories of hormone medicine in China. In conversation with previous scholarship on the dynamics of advertising and consuming hormones in China, this article examines specifically the discourses around the production and science of hormones. The circulation of hormones was informed by ideas of traditional Chinese medical cosmologies and enrolled in a nationalist movement encouraging the consumption of hormones produced by emerging Chinese medical entrepreneurs. This article provides a case study in a postcolonial context that problematizes historiographies depicting a linear transition of global hormone science from backward to scientific, from traditional to modern.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158808</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Addressing Challenges in Object-Based Robot Navigation and Mapping</title>
<link>https://hdl.handle.net/1721.1/158807</link>
<description>Addressing Challenges in Object-Based Robot Navigation and Mapping
Lu, Ziqi
Developing fully autonomous systems that can safely traverse and interact with the environment has been a long-term objective in robotics. Many relevant tasks, such as planning and mobile manipulation, require the robot to possess an object-level understanding of the ambient world. In particular, it is crucial to maintain a globally consistent object-based map of the environment for these operations. Without external assistance – such as a prior map or a motion capture system – the robot needs to navigate and map the environment using an object-based SLAM system. This thesis is dedicated to addressing several key challenges in developing object SLAM systems. The first challenge arises from the ambiguity of object poses in single-view observations. When an object is observed from a single vantage point, it can often have multiple probable poses due to symmetry, occlusion, or perceptual failures. It is difficult for an object SLAM system to incorporate such ambiguous measurements. To address this issue, we introduce an ambiguity-aware object SLAM method. We use Gaussian max-mixture models to represent and efficiently track the multiple object pose hypotheses, and gradually disambiguate the poses to construct a globally consistent object-level map. The second challenge is the performance degradation of neural networks when deployed in novel robot operating environments, commonly known as the domain gap problem. Specifically, when a pre-trained 6DoF object pose estimator is used in a novel environment, its pose predictions are often corrupted by outliers, and quantifying their uncertainties becomes difficult. Using these noisy predictions with unmodeled uncertainties as measurements in an object SLAM system can lead to significant estimation errors. To mitigate the problem, we propose a SLAM-supported self-training pipeline for domain adaptation of 6DoF object pose estimators. 
We exploit robust pose graph optimization (PGO) results to pseudo-label robot-collected images and fine-tune 6D object pose estimators. In particular, we develop an Automatic Covariance Tuning (ACT) method to model pose prediction uncertainties automatically during the PGO process. The third challenge is environmental changes. As changes occur in the scene, such as object insertion, removal, or rearrangement, the robot needs to efficiently detect these changes and update the map accordingly. While detecting and reflecting scene changes is relatively straightforward with handcrafted map representations like point clouds or voxels, it becomes significantly more difficult with learned radiance-field-based scene representations, such as Neural Radiance Field (NeRF) and 3D Gaussian Splatting (3DGS) models. In this thesis, we develop a radiance-field-based 3D change detection method to identify 3D object-level scene changes. Our approach can rapidly detect object changes in cluttered environments represented with radiance field models from as few as a single post-change image observation. We also develop efficient update methods for NeRF and 3DGS models to reflect physical object rearrangements, guided by sparse post-change images. By addressing these challenges, this thesis advances the robustness and adaptability of object SLAM systems in real-world environments, paving the way for more reliable and autonomous robotic systems capable of complex interactions with the environment.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158807</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Sustainability in Agriculture and Food Systems</title>
<link>https://hdl.handle.net/1721.1/158806</link>
<description>Essays on Sustainability in Agriculture and Food Systems
Liu, Xinming
Agriculture and food systems face severe challenges from climate change, population growth, and food insecurity. These unprecedented issues leave millions vulnerable to hunger and malnutrition, underscoring the urgent need for a transition toward sustainable agriculture and food systems. The first research stream in this thesis focuses on promoting sustainability in agriculture, particularly through contract farming. In Chapter 2, we model contract farming as a bi-level optimization problem for a farmer and a company. We analytically demonstrate that different contract structures offer varying incentives for farmers to invest in quality-improving efforts, resulting in different levels of quality for agricultural products. Empirical analysis of production-level data supports these model predictions.&#13;
&#13;
The second research stream examines sustainability in food systems, specifically addressing the issue of food waste. In Chapter 3, we explore the impact of online grocery shopping on household food waste. Using large-scale Nielsen Consumer Panel data and instrumental variable analysis, we establish a statistically significant causal relationship, showing that households with a higher frequency of online grocery shopping experience lower waste per capita, a proxy for household food waste. These findings emphasize the role of digital platforms in fostering sustainable consumption and call for continued support for online grocery shopping to mitigate consumer-level food waste. In Chapter 4, we turn to retail-level food waste. We design and implement behavioral interventions aimed at reducing food waste in restaurant kitchens in Ghana. As a Sub-Saharan African country, Ghana faces both food waste and food insecurity. Through a six-week field experiment and a difference-in-differences analysis, we demonstrate that interventions focused on public and private interest lead to 9% and 19% reductions in kitchen food waste, respectively. Follow-up surveys and further analyses reveal that this result may be related to the demographic/socioeconomic characteristics of workers (e.g., age and income), their perception of power distance within the management hierarchy, and their satisfaction with restaurant management.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158806</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling AI Copilots for Engineering Design With Parametric, Graph, And Component Inputs</title>
<link>https://hdl.handle.net/1721.1/158805</link>
<description>Enabling AI Copilots for Engineering Design With Parametric, Graph, And Component Inputs
Zhou, Rui
Engineering design demands the synthesis of multimodal and often incomplete data — ranging from detailed parametric specifications and assembly graphs to visual references and textual descriptions. Despite growing interest in generative models for design ideation and exploration, state-of-the-art approaches struggle with incomplete inputs, lack of support for modalities other than text and image, and limited controllability. This thesis addresses these gaps by unifying two complementary advances:&#13;
&#13;
First, we introduce a graph-guided diffusion approach for parametric data completion. By coupling Graph Attention Networks with a diffusion-based imputation mechanism, our method acts as a highly accurate and creative design auto-completion system for partial designs. On a dataset of 12,500 bicycles, this design imputation framework achieves a root mean square error (RMSE) of approximately 0.92 on numerical features and an error rate of around 0.18 for categorical attributes, outperforming both classical imputation methods such as MissForest, hotDeck, and PPCA, and advanced diffusion-based baselines such as TabCSDI. Moreover, it achieves a Diversity Score of 3.10, surpassing all baselines, illustrating that the imputation process transforms incomplete data into multiple creative designs.&#13;
&#13;
Second, we develop a multimodal control architecture that can extend foundation models to condition their generation processes on all or a subset of parametric inputs, assembly graphs, component images, and textual constraints. This model substantially enhances both the controllability and precision of the generation process of foundation generative models, enabling control via modalities that were not possible before. We first show that our model excels at tasks that state-of-the-art models struggle with. We further validate the performance of our model with surrogate models that investigate individual features. Our model achieves R^2 scores of 95% or greater on different continuous parameters. Further, we show that our model is able to generate creative and novel designs while maintaining a high level of precision. This enables engineers to guide generative outputs along precise dimensional, aesthetic, and functional targets. Across numerous trials under different settings, we observe that our pipeline robustly fuses tabular parametric information, assembly graphs, and reference component images to produce results aligned with both specification precision and creativity. &#13;
&#13;
Together, these contributions establish a coherent framework for AI-augmented design exploration. By viewing missing parameters as an opportunity for data-driven design autocompletion and by tightly integrating multimodal control over foundation models, this work elevates generative AI from a niche conceptual tool to a reliable design copilot. The implications of this thesis are profound: we show the possibilities and the pathways to AI copilot systems that can reduce data bottlenecks, broaden design spaces, and offer more thorough, constraint-adherent design candidates. As engineering problems grow in complexity and scale, the synergy of high-fidelity parametric imputation and multimodal control promises to accelerate innovation, cut development cycles, and guide human designers toward more inventive and manufacturable solutions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158805</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from Past Market Outcomes: Evidence from the Music Industry</title>
<link>https://hdl.handle.net/1721.1/158804</link>
<description>Learning from Past Market Outcomes: Evidence from the Music Industry
Du, Jason
We leverage unique features of music albums to investigate how musicians learn from current products when developing new products. We find that songs on a musician’s next album tend to be more similar to the songs that are more successful on that musician’s current album. This effect is stronger when the musician has less experience, and when the song on the current album is more novel (for that musician). Our findings suggest that musicians learn from the success of previous songs when developing new songs, and that learning is stronger if the musician has more need to learn, and when the song contains more new information.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158804</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Freight Distribution During Disasters: Measuring and Improving Operational Performance of Critical Systems</title>
<link>https://hdl.handle.net/1721.1/158803</link>
<description>Freight Distribution During Disasters: Measuring and Improving Operational Performance of Critical Systems
Rana, Shraddha
The frequency and intensity of weather-related natural disasters have increased in the last five decades. Moreover, the US faces more than a third of the disaster-related economic losses globally, the majority of which are from storms. As the demand for distribution of essential freight increases during disasters, physical and operational constraints decrease the capacity of freight distribution systems. Accordingly, public and private-sector stakeholders seek disaster preparedness and response interventions to ensure timely and economic distribution of vital freight to the population in need. The goal of this thesis is to facilitate better strategic and tactical planning that results in higher operational performance of essential freight distribution systems during disasters. We study two critical freight distribution systems, namely, downstream fuel distribution and full truckload transportation of general freight. Truckload transportation plays a vital role in distributing relief supplies during emergencies, and fuel is required for humanitarian operations such as running generators, moving emergency response crews, and evacuating the affected population. We collaborate with the US Federal Emergency Management Agency in response to multiple North Atlantic storms and measure the operational performance of these systems under regular and disaster conditions, and we identify public and private-sector interventions to improve performance during future disasters. 
Our research contributes to the disaster modeling and management, fuel distribution, service procurement, and truckload procurement literatures by (i) creating a system-level understanding of multi-server tandem cyclic queues with time-limited customers, (ii) studying process improvement interventions for disasters, (iii) quantifying the magnitude, geographical extent, timing, and duration of the causal effects of disaster conditions and consequent disaster relief activities on transportation procurement prices, (iv) using data-driven analysis to design flexible truckload contracts that consider uncertainty in demand, and (v) modeling dynamic pricing where the buyer offers the price to service providers. In this thesis, we provide several actionable insights for public and private-sector stakeholders to manage freight distribution during future disasters. We identify which process improvement interventions are best suited to each type of downstream fuel distribution system, and which storage terminals should be prioritized under a limited budget. We also measure how private-sector shippers should account for changes in truckload spot procurement prices during disaster episodes to manage their budgets and operational decisions. Moreover, we offer an alternative dynamic-priced truckload contract solution for public-sector shippers that deal with uncertain episodic demand in response to disasters. We demonstrate the impact of our research by applying it to multiple real-life case studies in the US. Furthermore, our methodologies and results are generalizable to other geographical regions as well as other disaster conditions. Thus, we hope that they are used by public and private-sector actors to better manage essential freight distribution moving forward.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158803</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Language Foundation Models in Medical Artificial Intelligence</title>
<link>https://hdl.handle.net/1721.1/158802</link>
<description>Natural Language Foundation Models in Medical Artificial Intelligence
Palepu, Anil
Over the past decade, the transformative rise of deep learning, particularly large language models (LLMs), has inspired experts across diverse fields, including healthcare, to think deeply about how artificial intelligence (AI) can revolutionize their fields. In this time, general foundation models, rather than narrow and highly specialized task-specific systems, have begun to emerge as the dominant paradigm. In healthcare, AI systems are already seeing widespread implementation in a variety of real-world use cases, perhaps without adequate evaluation and validation. Indeed, their often impressive ability to process natural language, a crucial medium of knowledge and communication in medicine, suggests that many of these modern foundation models may hold immense promise in the healthcare space. However, there exists a need to better study and understand their strengths, limitations, and robustness, particularly in more realistic and clinically relevant settings.&#13;
&#13;
This thesis focuses on two key classes of natural language-driven foundation models --- Contrastive Language Image Pretraining (CLIP) models, and Large Language Models (LLMs) --- and investigates how such models can encode and deliver useful clinical knowledge, for tasks like chest x-ray interpretation, differential diagnosis, history taking, and clinical management. As a whole, this thesis aims to further our collective understanding of the potential of natural language foundation models in medicine, while emphasizing the need for significant further research to address real-world challenges and understand the scopes in which such systems can be implemented safely and efficaciously.&#13;
&#13;
In the first chapter, I provide an overview of some relevant background, including contrastive language-image pretrained models, large language models, and their evaluation in the medical domain. &#13;
&#13;
In chapter 2, we improve the CLIP architecture for chest x-ray interpretation through a novel regularization technique applied during pre-training, and use this model for the zero-shot identification of chest x-ray findings.&#13;
&#13;
In chapter 3, we examine the reliability of CLIP-style models.  First, we evaluate their robustness to shortcut learning to understand the potential protective effects of text self-supervision. Next, we explore how conformal prediction can be used to control zero-shot classification performance and preempt compatible inputs for these CLIP-style models.&#13;
&#13;
In chapter 4, I describe the development of Articulate Medical Intelligence Explorer (AMIE), a conversational diagnostic AI fine-tuned with simulated medical dialogue. We evaluate the diagnostic capabilities of AMIE in two randomized studies with primary care physicians; first, in challenging clinicopathological conference (CPC) cases, and then in virtual text-based objective structured clinical examinations (OSCE).&#13;
&#13;
In chapter 5, we explore AMIE's management reasoning capabilities in two subspecialty domains: genetic cardiovascular disease and breast oncology. In these studies, we design domain-specific assessments for case management and compare AMIE's performance to generalists under subspecialist evaluation, as well as studying its potential assistive effect.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158802</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Persian Lessons: Islamic Art in America, circa 1876–1925</title>
<link>https://hdl.handle.net/1721.1/158801</link>
<description>Persian Lessons: Islamic Art in America, circa 1876–1925
Goldberg, Roxanne
This dissertation investigates the prehistory of academic Islamic art history in the United States through the lens of American cultural history. It shows that between the US Centennial in 1876 and the inauguration of the Pahlavi dynasty in Iran in 1925, aesthetic theory and American citizenship were debated in the United States through objects identified, regardless of actual provenance, as “Persian.” This cultural phenomenon coincided with the acceleration of the transnational market for Islamic art, including architectural tiles, single-page paintings, and hand-knotted pile carpets. Examining instances of collecting, classifying, displaying, and otherwise handling and beholding Islamic art within different scales of home (family, nation, and international Christianity) and spaces of pedagogy (the living room, commercial gallery, advertisement, schoolroom, voluntary association, museum, and world’s fair), "Persian Lessons" reveals that notions of Persian art were instrumentalized in the service of competing American identities and ideologies in the late nineteenth and early twentieth centuries. &#13;
&#13;
Through an analysis of published writings, museum archives, and government documents, the study shows how the art critic S. G. W. Benjamin, who also served as the first US diplomat to Iran in 1883–85, constructed an ideal of the Persian artist to champion liberal individualism and public art education. An investigation into the presence of Muslim prayer carpets in American Christian homes reveals that Sarkis Nahigian and other diasporic entrepreneurs from the Ottoman Empire became partners to middle-class women, who jointly turned the Oriental carpet into a symbol of obligation to the American nation. Lastly, an examination of visual and textual evidence recasts a collection of more than 20,000 objects—given to the Museum of Fine Arts, Boston, and William Hayes Fogg Art Museum of Harvard University by design pedagogue and museum patron-administrator Denman Waldo Ross between 1888 and 1935—as a tool of “training for citizenship.” Ross regarded Persian textiles and single-page paintings as value-neutral objects for the design education that he believed bolstered participatory democracy. &#13;
&#13;
The fifty-year history that this dissertation covers concludes in the late 1920s and '30s with the establishment of the first official positions in Islamic art history at universities and museums in the United States. "Persian Lessons" thus shows that the founding of Islamic art history as an academic discipline was not simply imported from Europe. Professionalization stabilized a half century of domestic engagement with Persian art as a polysemic guiding light for American culture and society.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158801</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing Impacts of Digital Sketching on Concept Generation in Early Stage Design</title>
<link>https://hdl.handle.net/1721.1/158800</link>
<description>Assessing Impacts of Digital Sketching on Concept Generation in Early Stage Design
Das, Madhurima
Digital design tools have become increasingly popular for assisting designers with different steps of the design process because they can simplify or automate components of those steps. Computer Aided Design (CAD) tools have assisted designers with tasks such as modeling and visualizing products prior to production and easily creating engineering drawings for manufacturing. Artificial Intelligence (AI) tools are being explored as collaborators that assist designers with interpreting sketches, assessing user needs, and generating ideas. Digital sketching tools such as tablets are a popular way for designers to easily create drawings that include different colors and styles and to create multiple drafts of a concept by copying and pasting elements from previous sketches. However, the introduction of new tools into the design process always has broader implications. For instance, using CAD tools too early in the process can lead to design fixation and can result in designers thinking a concept is more refined than it actually is due to the high quality and polish of the visualization created. Many researchers are now investigating when and how best to use AI tools in the design process, but all struggle with the associated ethical implications of using the right training data and ensuring that results are validated, given the serious risks related to misuse of AI. This dissertation focuses on one such digital design tool: tablets used for sketching. In an effort to expand the discipline’s understanding of how tablet use for sketching may enhance or detract from the design process, this thesis describes a series of studies investigating differences in ideation sketch attributes between tablets and pen and paper. Several of these sketch attributes have been linked with success in design; for instance, creating more sketches during ideation is linked with better eventual design outcomes. 
This work investigates how sketch quality and quantity are impacted by the tools used for both a short, high-level brainstorming session and a more detailed engineering concept generation task. Subsequently, it explores differences in the content and novelty of ideas generated using each medium. Finally, it examines ways in which designers’ ideas evolve throughout the ideation process on both tablets and pen and paper. These aspects of the ideation process are important to understand, especially if the use of tablets leads to different results. The first area of investigation explores differences in sketch metrics, including quantity, quality, and understandability, between different sketching tools. These metrics have been found to be related to longer-term design outcomes and the perceived creativity of concepts, so understanding the effect of the tablet on these sketch metrics can indicate how using a tablet for sketching could enhance or detract from overall design performance. The first study in this section investigates differences between pencil, pen, and tablet sketches during a short concept generation exercise and finds that sketch quality was highest for pencil drawings and lower for pen drawings, but that tablet drawings do not significantly differ in quality from either pencil or pen drawings. Subsequently, a longer, engineering-design-specific concept generation exercise was conducted to compare tablet sketching to pen-and-paper sketching. Here, no differences were found in sketch quantity or understandability between paper and tablet. However, sketch quality, smoothness, and proportion/accuracy were all found to be higher on pen and paper than on tablet. The second area of investigation explores whether using a tablet influenced designers’ ideation patterns. 
For instance, does the ability to copy and paste result in designers creating more interrelated ideas during brainstorming instead of exploring a variety of different design directions? There were no major differences found in the overall quantity of concept evolution present between tablet sketching and pen and paper sketching. However, tablet sketches across an ideation session had statistically significantly more concept chaining (related ideas appearing in a row) than paper and pen sketches despite having a similar number of related ideas overall. Additionally, concept chaining patterns were different for design prompts that had more than one functional requirement since not all ideas addressed all parts of the design prompt. However, for these prompts, the results from the primary functional requirement exhibited the same concept chaining patterns with more chaining present for tablet sketching than paper and pen sketching. The final area of investigation explores how designers’ ideas themselves are influenced by the sketching tool used through explorations of concept novelty and concept evolution. One study investigated novelty differences in concepts generated on tablet vs paper and found no correlation between the sketching tool used and the novelty of concepts generated. A second study was conducted to specifically compare designers’ own understanding of the interrelatedness of their ideas with the interrelatedness that could be assessed from the functional similarity of their sketches. Here, designers’ and reviewers’ assessments were found to not be aligned. In other words, sketches as standalone design artifacts did not communicate the extent of interrelatedness of concepts that was clear to the designer. Furthermore, the sketching tool used (tablet vs paper and pen) does not influence the level of agreement between designer and reviewer assessments. 
As such, using a tablet for sketching neither enhances nor detracts from the level of interrelatedness represented in sketches. These results suggest that assessing visual or functional similarity from sketches alone, regardless of the sketching tool used, may be insufficient for understanding all the relationships among a series of concepts as understood by the designer. Overall, these results indicate that using tablets as sketching tools has no clear significant benefit or burden for designers during ideation. Tablet use does not appear to enhance designers’ creative skills when it comes to sketch quantity or novelty, though it did result in lower-quality sketches, which has implications for the perceived creativity of concepts. Tablets were found to exhibit more instances of concept chaining than paper-and-pen sketches, though this trend did not persist when designers assessed their own concepts. Finally, this dissertation demonstrates that it is critical to seek designer input in identifying similarities across sketches, as functional similarity may not be aligned with designers’ own understanding of which of their ideas are related.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158800</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sorption-based atmospheric water harvesting: from atoms to applications</title>
<link>https://hdl.handle.net/1721.1/158799</link>
<description>Sorption-based atmospheric water harvesting: from atoms to applications
Zhong, Yang
Thirteen thousand trillion liters of water in the atmosphere is a natural resource found everywhere on Earth and available to anyone. Sorption-based atmospheric water harvesting (SAWH) extracts water vapor using sorbent materials across a broad spectrum of relative humidity, opening new avenues to address the water scarcity faced by two-thirds of the world’s population. SAWH technologies gained significant attention in 2017 with the development of a solar-powered system utilizing metal-organic framework (MOF) sorbents to extract water from the air. While groundbreaking, this proof-of-concept device produced only a few milliliters of water, far from sufficient to meet even a single person’s daily water needs. A large gap thus remains between laboratory discoveries and real-world applications. This thesis aims to advance the understanding of SAWH technologies from atoms to applications. It begins with a multiscale perspective on SAWH technologies towards real-world applications, addressing knowledge gaps across various length scales. Through this multiscale approach, we developed a framework that bridges material innovations with device realization. At the molecular scale, the thesis seeks to address a fundamental challenge: the inability to directly observe water sorption processes. To overcome this long-standing challenge, we introduced the use of cryogenic transmission electron microscopy (cryo-TEM) to probe water sorption in nanoporous materials at the single-pore level. This approach allows us to image water sorption and material structures with atomic resolution. Owing to the high resolution and in situ capabilities of cryo-TEM, we resolved a partially water-filled state of MOF crystals and observed that water molecules tend to occupy the centers of pores and fill neighboring pores once adjacent ones are filled. 
This technique offers new insights into sorption mechanisms and holds significant potential for the development of new sorbent materials. Building on the material-device-bridging framework, we proposed a dual-stage device architecture inspired by multistage distillation in desalination, where condensation heat from one stage drives desorption in the next, increasing productivity and thermal efficiency. To guide materials selection based on operating conditions, a universal thermodynamic model is developed to predict the efficiency of sorbent materials given their sorption isotherms. Additionally, this analysis reveals practical strategies to improve device-level sorption kinetics and heat transfer performance, pushing the technology toward thermodynamic limits. At the global scale, the framework enables the optimization of material deployment tailored to diverse climatic conditions. The real-world impact is further demonstrated through a techno-economic assessment, which illustrates SAWH technology’s competitiveness with bottled and tap water and pathways to further improve its cost-effectiveness. The thesis concludes with an outlook on future opportunities for SAWH technologies and a discussion of their societal and environmental impacts at scale, including their potential role in mitigating climate change.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158799</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Inference with Survival Outcomes via Orthogonal Statistical Learning</title>
<link>https://hdl.handle.net/1721.1/158798</link>
<description>Causal Inference with Survival Outcomes via Orthogonal Statistical Learning
Xu, Shenbo
The field of causal inference has recently made great strides in incorporating machine learning into confounding adjustment and estimation of heterogeneous treatment effects (HTE). However, gaps remained for survival outcomes.&#13;
&#13;
First, overlap-weighted effect estimators based on machine learning nuisance models were not available for such outcomes. Thus, researchers wishing to mitigate bias and variance from poor overlap had to accept potential bias from nuisance model misspecification in its place. In Chapter 2, we fill this gap by proposing a class of one-step cross-fitted double/debiased machine learning estimators for cumulative weighted average treatment effects for both survival outcomes and competing risk outcomes. Our approach combines importance sampling, semiparametric theory, and Neyman orthogonality to resolve both model misspecification and lack of covariate overlap between treatment arms in observational studies with censored outcomes. We give regularity conditions for the consistency, asymptotic linearity, and semiparametric efficiency bounds of the proposed estimators. Through simulation, it is shown that the proposed estimators do not require oracle parametric nuisance models. We apply the proposed estimators to compare the effects of two first-line anti-diabetic drugs on cancer outcomes.&#13;
&#13;
Second, a wide range of machine learning methods (or “learners”) for estimating heterogeneous treatment effects were not applicable to estimating effects on survival outcomes, particularly in the presence of competing risks. In Chapter 3, we fill this gap by developing several once-for-all (orthogonal) censoring unbiased transformations which convert time-to-event data into continuous outcomes, such that all HTE learners and oracle rates for continuous outcomes can be borrowed. Our approach not only reduces the pressing need to develop various HTE learners for censored outcomes and especially competing risks, but also fully leverages the state of the art of existing schemes. Through direct application of HTE learners to these transformed continuous outcomes, we obtain consistent estimates of heterogeneous cumulative incidence effects, total effects, and separable direct effects. We provide generic model-free learner-specific oracle inequalities bounding the finite-sample excess risk. The oracle efficiency results depend on the oracle selector and estimated nuisance functions from all steps involved in the transformation. We demonstrate the empirical performance of the proposed methods in simulation studies.&#13;
&#13;
An important application area for causal inference methods, and one which originally motivated my interest in the field, is drug repurposing. In Chapter 4, we apply the methods of Chapter 2 to investigate whether metformin, a diabetes medication, might also have unexpected beneficial effects on cancer. The analysis encountered three major challenges: poor overlap between treatment groups, model misspecification, and pre-cancer death as a competing risk for cancer incidence. To resolve these issues simultaneously, we take balancing-weighted total cause-specific effects, controlled direct effects, and separable effects as causal estimands and develop balancing-weighted double/debiased machine learning estimators for both cumulative incidence functions and restricted mean time lost, with all estimators satisfying Neyman orthogonality. Using Clinical Practice Research Datalink (CPRD) data, we find that metformin has a preventive direct effect on cancer incidence relative to sulfonylureas. The results also demonstrate the advantage of choosing the average treatment effect for the overlap population as the target quantity.&#13;
&#13;
Finally, just as machine learning helps to automate nuisance model estimation for confounding adjustment and modeling effect heterogeneity, causally informed artificial intelligence (AI) and large language models (LLMs) might help to automate hypothesis generation for drug repurposing and surveillance opportunities. In Chapter 5, we explore this potential by developing a high-throughput screening approach to evaluate available drugs across multiple diseases. The screening methodology aims to identify drug-disease pairs with significant positive signals that could represent promising repurposing candidates, while also detecting pairs with negative signals that might indicate potential safety concerns, both being critical aspects of pharmacoepidemiology research. This systematic approach leverages the convergence of expanding healthcare data sources and modern data science advances to establish a data-driven framework for drug repurposing discovery and pharmacovigilance.&#13;
&#13;
To conclude, we discuss the limitations of the proposed methods and provide possible future research directions.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158798</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shut Up and Dribble? Exploring the Real Estate Strategies and Trends of NBA Teams</title>
<link>https://hdl.handle.net/1721.1/158797</link>
<description>Shut Up and Dribble? Exploring the Real Estate Strategies and Trends of NBA Teams
Nguyen, Viet
NBA teams have always had to think about real estate through one particular lens: the arena they play their 41 home games in (plus any subsequent playoff games). Now, however, NBA teams have evolved beyond thinking only about the arena and have increasingly gotten involved in real estate development. This thesis seeks to explore the impact of real estate as a revenue driver for NBA teams, the trends observed, and the strategic decisions that teams must consider. It explores the current real estate activities of all 30 NBA teams and examines the choices that teams must make regarding arenas, real estate development, and practice facilities. The findings will help teams and municipalities understand best practices for team-driven real estate and how strategies can vary from team to team based on their situations.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158797</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reasoning over Hierarchical Abstractions for Long-Horizon Planning in Robotics</title>
<link>https://hdl.handle.net/1721.1/158796</link>
<description>Reasoning over Hierarchical Abstractions for Long-Horizon Planning in Robotics
Bradley, Christopher P.
We aim to enable robots to act intelligently in complex environments not explicitly designed around them. In order to do so, robots can simplify decision making by forming hierarchical abstractions of their world, and planning within those representations. However, in reality, the types of abstractions robots are able to build are often poorly aligned with the planning problems they must solve, which limits how useful those abstractions can be in efficient decision making. For example, autonomous agents struggle in many real world scenarios, particularly when their environments are large, cluttered with obstructions, or beset by uncertainty. These factors often imply that decisions made at higher levels of abstraction may not be easily refined to low level plans, leading to backtracking during either search or execution. In this thesis, we consider contributions which improve the efficiency and quality of long-horizon hierarchical planning in robotics. Specifically, we propose approaches which explicitly reason about the imperfections of the abstractions available to robots during planning, and show how those methods can improve performance on a variety of tasks and environments.&#13;
&#13;
There are three primary settings for which we make contributions in this thesis. First, we consider the problem of solving tasks in partially revealed environments, wherein our abstract plans cannot be known to be feasible until we attempt execution because the world is not fully known at planning time. To solve this problem, we first develop a high-level planning representation which recognizes that actions entering unknown space can either succeed or fail with some probability. The first contribution of this work is then to learn to predict the feasibility and cost of actions within that abstraction from visual input. We also describe a method for planning which uses these predictions, and we show experimentally that our approach generates plans that complete tasks in unknown environments significantly faster than heuristic-driven baselines. Next, we discuss work in Task and Motion Planning (TAMP), where the world is fully known, but the problems require such complex interaction with the environment that we must intelligently guide search in order to find plans efficiently. We build upon our work in the first setting by once again learning to predict the outcome and cost of different sub-tasks within a TAMP abstraction. We further contribute a novel method to guide search in this setting for plans which minimize cost given our learned predictions, and demonstrate the ability to find faster plans than established TAMP approaches both in simulation and on real-world robots. In our final problem setting, we consider solving TAMP problems in real-world, large-scale environments. To do this, we define an approach for constructing tractable planning abstractions from real perception using hierarchical scene graphs, ensuring that when we refine our abstract plans within these representations, the low-level trajectories still satisfy the given task’s constraints. 
A major contribution of this work is an approach for planning efficiently in these domains by pruning provably superfluous information from the world model. The unifying aim of the work in this thesis is to develop approaches which enable robots to solve complex tasks in large-scale, real world environments without human intervention. To that end, across all contributions, we demonstrate experimentally on real robots the importance of accounting for imperfections in hierarchical abstraction during planning.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158796</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>New tools for Bayesian optimal experimental design and kernel-based generative modeling</title>
<link>https://hdl.handle.net/1721.1/158795</link>
<description>New tools for Bayesian optimal experimental design and kernel-based generative modeling
Li, Fengyi
This thesis develops new computational approaches for two canonical problems in statistics and machine learning: optimal experimental design and generative modeling.&#13;
Optimal experimental design (OED) is important to model development for science and engineering applications and beyond, especially when only a small number of observations can be taken or experiments performed, due to resource limitations. In the Bayesian setting, a useful criterion for the importance of candidate experiments is the expected information gain (EIG) from prior to posterior, or equivalently, the mutual information (MI) between candidate observations and the parameters of interest. Yet estimating EIG for a given design can be quite challenging in nonlinear/non-Gaussian models, and for high-dimensional parameters and observations. &#13;
&#13;
In the first part of the thesis, we introduce new methods for estimating EIG based on transportation of measure. Specifically, we use marginal and conditional density estimates, obtained with semi-parametric transport models, in a Monte Carlo estimator. The density estimates are obtained by solving convex optimization problems. This framework is also compatible with implicit models, where one can simulate from the likelihood or prior but the associated density functions are unknown. We identify the optimal scaling of sample sizes between the "inner" density estimation steps and the "outer" EIG estimation, and demonstrate the efficiency of these choices numerically. If the dimensions of the parameters or observations are high, however, direct density estimation becomes intractable. Here, we use gradient-based information bounds, obtained via log-Sobolev inequalities, to identify optimal projections of the parameters and observations, and then apply our transport-based EIG estimation scheme. &#13;
&#13;
We next study the problem of cardinality-constrained observation selection to maximize MI in non-Gaussian settings, i.e., choosing the most informative subset of k observations from a candidate pool of size n &gt; k. Finding the exact solution to this combinatorial optimization problem is computationally costly, so we resort to greedy approaches based on computationally inexpensive lower bounds for MI. Here we again use log-Sobolev inequalities to construct such lower bounds for certain classes of non-Gaussian distributions, and exploit these lower bounds within the combinatorial problems. We demonstrate that our method outperforms random selection strategies and Gaussian approximations in many settings, including challenging nonlinear design problems with non-additive noise.&#13;
&#13;
In the second part of the thesis, we turn our attention to generative modeling, which can be understood as the problem of drawing new samples from an unknown distribution, from which a fixed sample is available. Our approaches employ kernel-type algorithms based on diffusion maps.&#13;
First, we propose an interacting particle system for generative modeling, based on diffusion maps and Laplacian-adjusted Wasserstein gradient descent (LAWGD). Diffusion maps are used to approximate the generator of the corresponding Langevin diffusion process from samples, and hence to learn the underlying data-generating manifold. LAWGD enables efficient sampling from the target distribution given the generator of the Langevin diffusion process, which we construct here via a spectral approximation with kernels, computed using diffusion maps. Our method requires no offline training and minimal tuning, and can outperform other approaches on data sets of moderate dimension.&#13;
&#13;
Second, we propose a generative model combining diffusion maps and Langevin dynamics. Diffusion maps are used to approximate the drift term from the available training samples, which is then implemented in a discrete-time Langevin sampler to generate new samples. By setting the kernel bandwidth to match the time step size used in the unadjusted Langevin algorithm, our method effectively circumvents any stability issues typically associated with time-stepping stiff stochastic differential equations. We demonstrate the performance of our proposed scheme through experiments on synthetic datasets of increasing dimension, and on a conditional sampling problem arising in stochastic subgrid-scale parametrization of a dynamical system.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158795</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on Spatial Constraints and Gender Equality: the Impact of COVID-19 Lockdowns on Work-from-Anywhere Dynamics and Gender Equality in Job Searches</title>
<link>https://hdl.handle.net/1721.1/158794</link>
<description>Essays on Spatial Constraints and Gender Equality: the Impact of COVID-19 Lockdowns on Work-from-Anywhere Dynamics and Gender Equality in Job Searches
Labuzova, Tatiana
This dissertation explores the intersection of spatial constraints and gender equality by leveraging the COVID-19 lockdowns as a natural experiment to study the impact of work-from-anywhere (WFA) dynamics on job search behaviors. The introduction of mandatory lockdowns drastically shifted the labor market landscape, prompting an increase in the demand for flexible work formats. Utilizing unique data from over one million job seekers on a large online employment platform, this research examines how the sudden wide availability of remote work options influenced job search activities differently across genders. A comparison of pre- and post-COVID-19 lockdown data shows that women significantly increased their engagement with geographically flexible job postings, reacting more strongly than men to the rise in remote job opportunities at both the job viewing and application stages. This shift also resulted in a narrowing of the wage gap in positions viewed and applied for during the post-lockdown period compared to pre-lockdown benchmarks. Notably, the study identifies variations in job search behavior among those likely constrained by domestic responsibilities. While differences in job posting views suggest an initial differential impact, such differences vanish at the application stage. Collectively, these results indicate that the pandemic-induced shift towards remote work has contributed to a gender-equalizing effect in the job market, including for those navigating domestic labor constraints. This research not only highlights the transformative potential of WFA arrangements in promoting gender equality but also provides insights into the mechanisms that drive these changes within the labor market. &#13;
&#13;
Keywords: organizational studies, gender inequality, flexible working arrangements, hiring, applications processes, decision making, digital platforms.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158794</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Systems-Theoretic Framework For Safety-Driven Development of System Architectures</title>
<link>https://hdl.handle.net/1721.1/158793</link>
<description>A Systems-Theoretic Framework For Safety-Driven Development of System Architectures
Poh, Justin Wei Siang
Modern complex systems are increasingly expected to exhibit emergent properties such as safety and security even as they become more complex, interconnected, and reliant on software than ever before. Because of this evolution in the characteristics of these systems, the methods available today for developing system architectures no longer provide systems engineers with adequate design support. As a result, it is becoming increasingly challenging for systems engineers to develop system architectures that exhibit emergent properties like safety. This thesis addresses this problem by developing a safety-driven architecture development framework that enables the design of emergent properties such as safety into a system architecture from the beginning. The key idea is that the results from a hazard analysis process known as Systems Theoretic Process Analysis (STPA) should drive design decisions. The framework therefore starts with an initial STPA analysis of the system to determine how unsafe or undesirable behavior could occur. Structured and systematic processes are then provided to help systems engineers use the STPA results to develop the required control behavior of the system and explore possible system architecture options to implement that control behavior. This framework therefore enables systems engineers to make more informed early architectural design decisions driven by safety considerations. This framework is applied to an Urban Air Mobility (UAM) case study to demonstrate that it provides the necessary design support to enable the development and refinement of an air traffic management (ATM) architecture for UAM. When creating a system architecture, assumptions may also need to be made to mitigate the inherent uncertainties and lack of detailed information about the system at that early stage of design. 
However, these assumptions are used as the basis for design decisions, and it is important that they remain valid to avoid flaws in the architecture arising when underlying assumptions become invalid. Thus, this thesis also develops and demonstrates a supporting framework to help identify these underlying assumptions and ensure they remain valid both during system development and after the system is placed into operation.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158793</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explicit formulas for weighted orbital integrals for the inhomogeneous and semi-Lie arithmetic fundamental lemmas conjectured for the full spherical Hecke algebra</title>
<link>https://hdl.handle.net/1721.1/158792</link>
<description>Explicit formulas for weighted orbital integrals for the inhomogeneous and semi-Lie arithmetic fundamental lemmas conjectured for the full spherical Hecke algebra
Chen, Evan
As an analog to the Jacquet-Rallis fundamental lemma that appears in the relative trace formula approach to the Gan-Gross-Prasad conjectures, the arithmetic fundamental lemma was proposed by Wei Zhang and used in an approach to the arithmetic Gan-Gross-Prasad conjectures. The Jacquet-Rallis fundamental lemma was recently generalized by Spencer Leslie to a statement holding for the full spherical Hecke algebra. In the same spirit, there is a recent conjectural generalization of the arithmetic fundamental lemma to the full spherical Hecke algebra. This paper formulates another analogous conjecture for the semi-Lie version of the arithmetic fundamental lemma proposed by Yifeng Liu. Then this paper produces explicit formulas for particular cases of the weighted orbital integrals in the two conjectures mentioned above.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158792</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cooling with less: Design and simulation of multifunctional building components for a material-efficient, heat-resilient architecture</title>
<link>https://hdl.handle.net/1721.1/158791</link>
<description>Cooling with less: Design and simulation of multifunctional building components for a material-efficient, heat-resilient architecture
Gascón Alvarez, Eduardo
As temperatures rise globally and the demand for housing intensifies, designing affordable buildings for heat resilience and with low carbon emissions becomes crucial. Conventional air conditioning (AC) systems, although often an effective and accessible cooling solution, are energy-intensive and typically fail to consider local climatic and urban contexts. This work focuses instead on the opportunity of designing building components (such as slabs, blocks, roofs, or footings) for multifunctionality, integrating passive strategies and low-energy cooling systems within them in a material-efficient manner. Collapsing multiple functions into a single building component is typically regarded as a strategy that leads to better overall performance and reduced costs compared to implementing each function separately. However, the effectiveness of this strategy in cooling-dominated climates and in the context of the current climate crisis remains underexplored. &#13;
&#13;
The dissertation proposes new designs and evaluation methods for three multifunctional building components: multi-hollowed blocks (ceramic blocks with interior air pockets), shaped chilled slabs (shaped concrete slabs with embedded radiant ceiling systems), and integrated heat sinks (thermally activated concrete footings and roofs). Each component is designed to optimize a specific cooling strategy based on its context within the building and intrinsic material properties - thermal mass, radiant cooling, and ground/radiative cooling. Chapter 2 demonstrates how shape-optimized ceramic blocks can double the heat capacity of existing commercial solutions without additional material, or reduce their weight by 33% while increasing the heat capacity by 23%. Chapter 3 presents slab geometries that achieve embodied carbon reductions of up to 50% relative to conventional prismatic floors while reducing operational carbon by 12-14%. Chapter 4 finds that buildings in temperate climates with a Floor Area Ratio (FAR) of up to 4.5 can meet 100% of the cooling demand exclusively through heat dissipation systems integrated into the building’s foundations and roof. Methodologically, this research combines heat transfer theory and analytical models with state-of-the-art shape optimization methods; this effort results in a fast and accurate multi-objective simulation framework tailored for early design stages.&#13;
&#13;
This thesis provides, for the first time, validated methods and quantitative results that support the viability of multifunctional building components in cooling-dominated climates, optimizing the shape of walls, blocks, foundations, and roofs to improve their structural and thermal performance simultaneously, reducing their weight and improving buildings’ resilience to heat.  From a climate adaptation perspective, this approach ensures that buildings are ready for extreme heat even when active systems are unavailable due to, for example, a power outage. From a carbon mitigation perspective, the presented results highlight the potential to reduce the whole-life carbon of buildings by shape-optimizing components for enhanced thermal performance and material efficiency.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158791</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrospray Thrusters in Chemical-Electric Multimode Propulsion for Small Satellites</title>
<link>https://hdl.handle.net/1721.1/158790</link>
<description>Electrospray Thrusters in Chemical-Electric Multimode Propulsion for Small Satellites
Bruno, Amelia R.
Propulsion for small spacecraft is typically one of two modes, chemical or electric. These modes offer complementary propulsive performance: chemical propulsion provides high thrust and low specific impulse, while electric propulsion provides the inverse. As such, having access to both modes on the same spacecraft (i.e. multimode propulsion) is extremely useful. Unfortunately, the conventional propellants used by chemical and electric thrusters are highly incompatible, making this particularly difficult on small spacecraft that lack the mass, power, and volume to accommodate two separate propulsion systems. However, recent advancements in green monopropellants -- developed as less-toxic alternatives to hydrazine in chemical monopropellant thrusters -- have created a new family of ionic liquid monopropellants, making them a natural propellant for a highly compact form of electric propulsion known as electrospray thrusters. This presents a unique opportunity for a propellant to be shared between two propulsion modes, decreasing the required mass and volume enough to be feasible for small spacecraft. This thesis examines the use of ionic liquid monopropellants in electrospray thrusters for a multimode chemical-electric propulsion system. It focuses particularly on ASCENT, a high-maturity monopropellant with flight heritage in chemical thrusters.&#13;
&#13;
In this work, the performance of ASCENT in the MIT ion Electrospray Propulsion System (iEPS) is extensively characterized. Experimental work includes ion plume diagnostics, indirectly and directly obtained performance estimates, temperature-dependent performance estimates, and extended duration firing behavior. Preliminary studies of similar monopropellants are also conducted to assess their use in a multimode system. To support an upcoming technology demonstration flight, a new multimode-compatible iEPS thruster tank is designed, fabricated, and validated. The integration and operation requirements for this thruster in a flight-ready system are defined. Finally, the mission benefits of an ASCENT multimode system for CubeSats are compared against current commercial options using an Earth observation mission case study.&#13;
&#13;
This work finds that an iEPS thruster with ASCENT propellant has a thrust of 9-15 µN, a specific impulse of 600-750 seconds, and a total efficiency of 18-22%, depending on current setpoint. We find that ASCENT is slightly volatile in high vacuum, which causes time-dependent losses in efficiency and specific impulse from gradual propellant evaporation. This volatility may also increase thruster lifetime by mitigating the risk of thruster failure by emitter flooding. This work also identifies a modified version of ASCENT, created when the propellant is exposed to iron. This modified version produces a dramatically higher thrust and thrust-to-power compared to standard ASCENT. Additionally, flight-ready configurations of a multimode system are defined for 6U, 12U, and 27U CubeSats. A case study analysis found that the benefits of a chemical-electrospray multimode system are best realized at the 12U scale and above. Overall, this thesis provides critical insights on the performance, integration, and operation of electrospray thrusters with ionic liquid monopropellants. These results can then be used to enable a multimode propulsion system for small satellites.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158790</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Crafting Cannabinoid Capitalism: Health, Sustainability, and Regeneration in the United States</title>
<link>https://hdl.handle.net/1721.1/158789</link>
<description>Crafting Cannabinoid Capitalism: Health, Sustainability, and Regeneration in the United States
Rewegan, Alexander Nicholas
This dissertation offers a critical exploration of cannabis legalization through an ethnographic study of small-scale “legacy” cannabis farmers in Humboldt County, California, as they navigate a complex transition from prohibition to commodity capitalism. I focus on their collective efforts to envision and practice “regenerative agriculture” as a response to both the historical injustices of prohibition and the compounding challenges of climate change. Drawing on history, STS, and the anthropology of food, agriculture, and medicine, I show how the logics of the war on drugs—rooted in carcerality, settler colonialism, and plantation agriculture—structurally and affectively persist in the so-called “post-prohibition” era, frustrating farmers’ efforts to resist monopolization and dispossession. Throughout, I attend to how the pervasive notions of “health,” “sustainability,” and “regeneration” are actively negotiated, modified, and put to use as material and symbolic tools in crafting medicinal, agricultural, and ecological futures. The Introduction weaves a tapestry of themes, histories, and theories that set the stage for the main ethnography. Through a blend of personal narrative, ethnographic vignette, and critical theory, it works to situate cannabis as a fluid and multifaceted object, highlighting people’s ambivalent hopes and cynicisms towards legalization. From alternative farming to molecularized biocapital, it articulates the intersecting influences of climate change, racial capitalism, and Indigenous sovereignties in ongoing projects to commercialize and legalize cannabis in a globally connected United States.
Chapter One outlines my research methods and provides a social and narrative history of the study’s fieldsite, grappling with the anthropological complexities and complicities of studying working landscapes in a settler colonial “frontier ecology.” Chapter Two unpacks the shifting and embodied subjectivities of both farmers and workers as they reconfigure themselves in service of licensed production, highlighting sociocultural tensions and contradictions, the structural challenges of regenerative gardening, and the labor dynamics that shape these processes. Chapter Three analyzes how the inchoate and social nature of cannabis regulation both hinders and supports regenerative farming, emphasizing financial strain, and the ever-pervasive role that surveillance technologies are playing in cannabis governance. Chapter Four shifts to the harvest season, exploring farmers’ collective efforts to market their products through the concept of “drug terroir,” unpacking how their values and practices entangled with regional efforts to address wildfires and remediate leftover drug war infrastructures. Chapter Five moves off the farm and onto the topic of consumption as it historicizes the growing scientific literature about cannabis and pregnancy, demonstrating how carcerality continues to infiltrate maternal-fetal health science and conceptions of reproduction and health. The dissertation ultimately explores the ways in which American cannabis legalization often regenerates, rather than resolves, the legacies of prohibition and settler colonialism, while at the same time illuminating alternative and promising practices that might challenge these enduring forces.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158789</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Navigating RAD Conversions: Suggestions for Public Housing Rehabilitation</title>
<link>https://hdl.handle.net/1721.1/158788</link>
<description>Navigating RAD Conversions: Suggestions for Public Housing Rehabilitation
Yan, Yu
Public housing in the United States, a critical resource for nearly 1.7 million residents, faces significant challenges due to aging infrastructure and chronic operating funding shortfalls. The Rental Assistance Demonstration (RAD) program, authorized by Congress in 2012, aims to address these issues by leveraging private financing to rehabilitate and modernize public housing properties. Although the RAD program has been around for more than a decade and has leveraged over $18.5 billion of construction investments, close to 75% of the more than 2,500 eligible local public housing authorities (PHAs) are yet to benefit from it. This thesis examines the evolution of RAD programs, including the two newer tools, RAD/Section 18 Blend and Faircloth-to-RAD, and their adoption by PHAs.&#13;
The research incorporates a review of HUD programs and policies, RAD implementation data, and interviews with industry practitioners, including PHAs, developers, and consultants, to understand the hurdles preventing the adoption of the program and the characteristics of successfully structured projects. This thesis offers insights into how specific strategies are used to overcome the hurdles and provides practical recommendations for PHAs seeking to leverage RAD for public housing preservation and development. Key findings highlight the importance of utilizing available funding sources to achieve financial feasibility and of enhancing organizational skills and capacity.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158788</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decadal to centennial-scale climate interactions across the Indo-Pacific region</title>
<link>https://hdl.handle.net/1721.1/158787</link>
<description>Decadal to centennial-scale climate interactions across the Indo-Pacific region
Wang, Shouyi
An improved understanding of decadal to centennial-scale climate variability is critical for properly attributing recently observed low-frequency changes to internal climate oscillations and/or anthropogenic forcings, as well as for improving predictability of decadal variability. This thesis investigates ocean and atmospheric circulation changes and associated impacts within the tropical Indo-Pacific, where low-frequency changes in heat and freshwater impact the livelihoods of billions of people. Because the instrumental record is too short to investigate centennial variability, this thesis leverages numerical simulations and records from paleoclimate archives to provide insights into low-frequency tropical dynamics. In Chapter 2, we explore the dynamics that drive Indonesian Throughflow surface transport variability using a series of forced global high-resolution ocean simulations. We show that surface wind changes associated with Pacific decadal variability drive changes in the western boundary currents that modulate the Indonesian Throughflow, consistent with mechanisms identified on interannual timescales. This work identifies a relationship between atmospheric circulation and transport through a key low-latitude passageway. Motivated by paleoclimate evidence of multi-year droughts in Southeast Asia, we investigate their potential drivers in Chapter 3 using an ensemble of coupled climate model simulations. These simulations illustrate that Indo-Pacific internal variability dominated Southeast Asian rainfall extremes during the last millennium, although the influence of volcanic eruptions was detectable. We find that multi-year pluvials were driven by both Pacific and Indian Ocean modes, while droughts were driven largely by Pacific Ocean impacts alone. Our analysis not only quantifies the role of internal and external drivers in Southeast Asian rainfall but also presents a probabilistic analysis framework that may be useful for water resources prediction. 
Lastly, in Chapter 4 we reconstruct the Indian and Pacific Walker circulations and the Indian Ocean Basin Mode by synthesizing tropical records (corals, tree-rings, and speleothems) of past ocean and atmospheric conditions to investigate basin interactions over the past four centuries. Our results demonstrate that Indo-Pacific climate was generally coupled on decadal-centennial timescales throughout the past four centuries but was notably decoupled in the early 19th century. Using climate models, we attribute this decoupling to a series of strong volcanic eruptions. Dynamically, we link this inter-basin decoupling to volcanically induced changes in hemispheric temperature gradients, which modulate the teleconnections across the Indo-Pacific. These past disruptions in basin interactions provide context for ongoing and simulated future decoupling under a high emission scenario, as global warming also alters interhemispheric temperature gradients. This thesis sheds light on the complex dynamics that drive ocean-atmosphere variability across the Indo-Pacific on decadal to centennial timescales.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158787</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Systems Architecture and the EVDT Framework&#13;
for Monitoring Methane Emissions in Rio de Janeiro</title>
<link>https://hdl.handle.net/1721.1/158786</link>
<description>Using Systems Architecture and the EVDT Framework&#13;
for Monitoring Methane Emissions in Rio de Janeiro
Ajisafe Jr., Frederick Henry Oladimeji
Methane is a powerful greenhouse gas that has important implications for climate change. Over the past decade, satellites have rapidly improved their ability to detect this gas from above the atmosphere. This thesis uses two systems engineering frameworks, Systems Architecture and EVDT, to examine a case study of methane monitoring in Rio de Janeiro, Brazil. Data from one of these novel satellite systems, GHGSat, are taken over the Seropédica landfill near the city and compared to Rio’s own IPCC- and GPC-derived greenhouse gas inventory. This is followed by participant observation in the summer of 2024 involving interviews, discussions, and site visits. A near-doubling of methane was observed over Seropédica, raising questions about the cause of this increase. The direct engagement with stakeholders provided by this study contributes to a literature gap in satellite monitoring of urban landfills in southeastern Brazil.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158786</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contact-aware and multi-modal robotic manipulation</title>
<link>https://hdl.handle.net/1721.1/158785</link>
<description>Contact-aware and multi-modal robotic manipulation
Zhao, Jialiang
Intelligent robotic manipulation has advanced significantly in recent years, driven by progress in foundational cognitive models, sensor-fusion techniques, and improvements in actuators and sensors. However, most contemporary robotic systems still lack the ability to effectively recognize and understand contact dynamics, which are critical for performing manipulation tasks beyond basic pick-and-place operations. This thesis argues and proves that contact awareness is essential for the successful deployment of robotic systems, not only in structured environments such as factories but also in unstructured settings like domestic households. Achieving contact awareness necessitates advancements in three key areas: the development of improved contact-sensing hardware, the creation of more expressive frameworks for representing and interpreting contact information, and the design of efficient modality-fusion algorithms to integrate these capabilities into robotic action planning. This work addresses these challenges by (1) proposing novel mechanical designs that enable touch sensors to adopt more compact and versatile forms while enhancing their durability and manufacturability, (2) introducing a foundational representation learning framework capable of learning a shared tactile latent representation, which can be transferred across different sensors and downstream tasks, and (3) developing a compositional diffusion-based approach for action prediction that integrates tactile sensing signals with other perception modalities, thereby enabling learning across diverse environments and promoting policy reuse. Along the way, this thesis demonstrates that tactile sensors can be both compact and versatile, challenging common perceptions to the contrary. It also establishes that tactile sensing is indispensable not only for high-precision tasks, such as electronics assembly, but also for everyday activities, including cooking and tool usage.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158785</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formally Verifying Secure and Leakage-Free Systems: From Application Specification to Circuit-Level Implementation</title>
<link>https://hdl.handle.net/1721.1/158521</link>
<description>Formally Verifying Secure and Leakage-Free Systems: From Application Specification to Circuit-Level Implementation
Athalye, Anish
Hardware and software systems are susceptible to bugs and timing side-channel vulnerabilities. Timing leakage is particularly hard to eliminate because leakage is an emergent property that can arise from subtle behaviors or interactions between hardware and software components in the entire system, with root causes such as non-constant-time code, compiler-generated timing variation, and microarchitectural side channels. This thesis contributes a new approach using formal verification to rule out such bugs and build systems that are correct, secure, and leakage-free.&#13;
&#13;
This thesis introduces a new theory called information-preserving refinement (IPR) for capturing non-leakage in addition to correctness and security, implements a verification approach for IPR in the Parfait framework, and applies it to verifying hardware security modules (HSMs). Using Parfait, a developer can verify that an HSM implementation leaks no more information than is allowed by a succinct application-level specification of the device's intended behavior, with proofs covering the implementation's hardware and software down to its cycle-precise wire-I/O-level behavior.&#13;
&#13;
This thesis uses Parfait to implement and verify several HSMs, including an ECDSA certificate-signing HSM and a password-hashing HSM, on top of Ibex and PicoRV32-based hardware platforms. Parfait provides strong guarantees for these HSMs: for example, it proves that the ECDSA-on-Ibex implementation—2,300 lines of code and 13,500 lines of Verilog—leaks nothing more than what is allowed by a 40-line specification of its behavior.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158521</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guiding Deep Probabilistic Models</title>
<link>https://hdl.handle.net/1721.1/158520</link>
<description>Guiding Deep Probabilistic Models
Garipov, Timur
Deep probabilistic models utilize deep neural networks to learn probability distributions in high-dimensional data spaces. Learning and inference in these models are complicated due to the difficulty of direct evaluation of the differences between the model distribution and the target. This thesis addresses this challenge and develops novel algorithms for learning and inference based on the guidance of complex parameterized distributions towards desired configurations via signals from auxiliary discriminative models.&#13;
&#13;
In the first part of the thesis, we develop novel stable training objectives for Generative Adversarial Networks (GANs). We show that under standard unary-discriminator objectives, most of the valid solutions, where the learned distribution is aligned with the target, are unstable. We propose training objectives based on pairwise discriminators that provably preserve distribution alignment and demonstrate improved training stability in image generation tasks.&#13;
&#13;
In the second part of the thesis, we introduce distribution support alignment as an alternative to the distribution alignment objective and develop a learning algorithm that guides distributions towards support alignment. We demonstrate the effectiveness of our approach in unsupervised domain adaptation under label distribution shift. Recent works have shown that under cross-domain label distribution shift, optimizing for distribution alignment is excessively restrictive and causes performance degradation. Our algorithm, which is based on support alignment, alleviates this issue.&#13;
&#13;
In the third part of the thesis, we develop a novel approach to compositional generation in iterative generative processes: diffusion models and Generative Flow Networks (GFlowNets). Motivated by the growing prominence of generative models pre-trained at scale and the high training costs, we propose composition operations and guidance-based sampling algorithms that enable the combination of multiple pre-trained iterative generative processes. We offer empirical results on image and molecular generation tasks.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158520</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Technology Platform for Enabling Next-Generation Vacuum Electronic Devices Based on Silicon Field Emitter Arrays</title>
<link>https://hdl.handle.net/1721.1/158519</link>
<description>A Technology Platform for Enabling Next-Generation Vacuum Electronic Devices Based on Silicon Field Emitter Arrays
Karaulac, Nedeljko
As the demand for electronics with better performance and increased functionality continues to escalate, researchers are finding it more and more difficult to surpass the limitations that solid-state electron transport imposes on conventional transistors. Nanoscale vacuum-channel transistors, in which the electron transport channel is vacuum rather than a solid, offer a potential alternative device architecture beyond device scaling. Due to their ballistic transport and higher breakdown field, nanoscale vacuum-channel transistors are expected to show better performance in a wide variety of high-frequency, high-power, or harsh environment applications. Silicon field emitter arrays (FEAs) are a proven and mature technology that can be implemented as vacuum transistors, and they could also be used in vacuum integrated circuits. Many of the challenges regarding uniformity, reliability, and lifetime have been addressed in this technology. However, the scalability of the emission current remains a challenge. &#13;
&#13;
In this work, we develop a layout-independent fabrication process for silicon FEAs that improves the scalability of emission current with array size. The fabrication process begins by first fabricating field emitters everywhere across the wafer and then selectively etching field emitters to form individual arrays. Using this process, we present for the first time silicon FEAs with array sizes ranging from 1 μm² to 1 mm², and we obtain emission current ranging from 1 nA to 1 mA, which represents a range of six orders of magnitude. In order to facilitate design of future vacuum integrated circuits, we develop a circuit model for silicon FEAs based on measurements of the transfer and output characteristics. The circuit model is used to demonstrate a proof-of-concept inverter based on a silicon FEA and pull-up resistor that could potentially be fabricated as a vacuum integrated circuit. Lastly, we characterize and model the statistical variation in emission current to determine if it is feasible to build vacuum integrated circuits using the layout-independent fabrication process presented in this work.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158519</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Sparse Representations for Efficient Planning&#13;
in Uncertain Environments</title>
<link>https://hdl.handle.net/1721.1/158518</link>
<description>Designing Sparse Representations for Efficient Planning&#13;
in Uncertain Environments
Veys, Yasmin
We would like to enable robots to navigate efficiently in large, outdoor environments, where the traversabilities of many regions are unknown prior to planning. If we reason about the uncertainty in the environment instead of assuming that all unknown space is free to move through, we can generate policies that result in, on average, more efficient navigation. However, designing models that enable intelligent and efficient reasoning about environmental uncertainty is challenging. We would like our model to capture the underlying navigation problem and accurately represent the relevant uncertainty, yet remain as sparse as possible, so that planning remains tractable. Higher model expressiveness improves plan quality but reduces computational efficiency in planning, whereas higher model sparsity improves efficiency at the cost of plan quality. Balancing model expressiveness and model sparsity, thus, is crucial for generating high quality plans efficiently. In this thesis, we describe several useful models for planning under uncertainty and justify our decision to use weighted stochastic graphs with probabilistically traversable edges. We then present a novel method of efficiently generating sparse stochastic graphs given coarse information derived from overhead images of our environments. We test our approach in several simulated environments, demonstrating that our graphs effectively trade off between plan quality and planning efficiency for uncertainty-aware agents navigating in the graph. We then deploy our algorithms in a real-world environment on real-world hardware for single-agent and multi-agent teams. We discuss the challenges associated with using our approach in the field and the implications of our model assumptions not matching the real world. Finally, we present preliminary results for adding cost uncertainty to our graph-based representation.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158518</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Power Efficient Analog Front End for Continuous&#13;
Ultrasound Imaging of the Bladder</title>
<link>https://hdl.handle.net/1721.1/158517</link>
<description>A Power Efficient Analog Front End for Continuous&#13;
Ultrasound Imaging of the Bladder
Manohara, Mohith
Continuous bladder monitoring is important for the care of bedridden patients. One method to continuously monitor the bladder is to capture ultrasound images and use machine learning to measure the bladder volume from these images. Circuits for implementing these functions can be integrated onto a wearable device, and each of these functions can be integrated onto a single chip. In this thesis, we analyze ultrasound imaging in the context of the bladder to develop algorithms and hardware for continuous bladder monitoring. We first assemble a discrete setup which can form ultrasound images. Using this setup, we describe a new algorithm that generates an ultrasound image while power-gating the hardware during the imaging process, saving additional power when capturing each image. We combine these concepts into a single Analog Front End (AFE) chip that can capture images in a power-efficient manner.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158517</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum free games</title>
<link>https://hdl.handle.net/1721.1/158516</link>
<description>Quantum free games
Zhang, Tina
The complexity of free games with two or more classical players was essentially settled by Aaronson, Impagliazzo, and Moshkovitz [AIM14]. In the quantum world, there are two complexity classes that can be considered quantum analogues of classical free games: (1) AM*, the multiprover interactive proof class corresponding to free games with entangled players, and, somewhat less obviously, (2) BellQMA(2), the class of quantum Merlin-Arthur proof systems with two unentangled Merlins, whose proof states are separately measured by Arthur. In this work, we make significant progress towards a tight characterization of both of these classes. &#13;
1. We show a BellQMA(2) protocol for 3SAT on n variables, where the total amount of communication is Õ(√n). This answers an open question of Chen and Drucker [CD10] and also shows, conditional on ETH, that the algorithm of Brandão, Christandl and Yard [BCY10] for optimizing over separable states is tight up to logarithmic factors. &#13;
2. We show that AM*[n_provers = 2, q = O(1), a = poly log(n)] = RE, i.e. that free entangled games with constant-sized questions are as powerful as general entangled games. (In contrast, [AIM14] shows that classical free games are much weaker than general classical games.) We show this using a question “hyper-compression” theorem that iteratively applies the introspection technique of Ji et al. [JNV⁺20]. Our result is a significant improvement over the headline result of Ji et al., whose MIP* protocol for the halting problem has poly(n)-sized questions and answers. &#13;
3. By the same techniques, we obtain a zero-gap AM* protocol for a Π₂-complete language with constant-size questions and almost logarithmically (O(log n · log* n)) large answers, improving on the headline result of Mousavi, Nezhadi and Yuen [MNY21]. &#13;
4. Using a connection to the nonuniform complexity of the halting problem, we show that any MIP* protocol for RE requires Ω(log n) bits of communication. It follows that our results in item 3 are optimal up to an O(log* n) factor, and that the gapless compression theorems of [MNY21] are asymptotically optimal. We conjecture that these bounds can be saturated in the gapped case as well.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158516</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Employing Magnetic Field Records at Key Moments in Planetary Evolution</title>
<link>https://hdl.handle.net/1721.1/158515</link>
<description>Employing Magnetic Field Records at Key Moments in Planetary Evolution
Mansbach, Elias N.
The analysis of the paleomagnetic record in meteorites provides a unique and powerful viewpoint on early solar system and planetary evolution. Indeed, meteorites are the only tangible objects that bore witness to these important events, making their records particularly precious. In this thesis, I present my dissertation work that addresses how the meteoritic paleomagnetic record and additional records from materials provided by return sample missions can be used to study three stages in early solar system and planetary evolution: I) The&#13;
protoplanetary disk; II) The initial melting on the first planetary bodies; and III) The early interior evolution of modern planets. I address Stage I through a paleomagnetic analysis of returned samples from asteroid Ryugu to determine the role of magnetic fields in stellar accretion in the distal solar system (Chapter 2). I address Stage II through a paleomagnetic analysis of the Acapulco primitive achondrite (Chapter 3) and micromagnetic modeling of the ferromagnetic mineral tetrataenite (Chapter 4) to elucidate core formation on small bodies. Lastly, I address Stage III through preparation for paleomagnetic studies of future returned samples from the Perseverance rover to determine the lifetime and properties of the Martian dynamo (Chapters 5 and 6). I end with a brief conclusion and ideas for future work.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158515</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guiding Navigation of Unknown Environments with Distant Visual Cues</title>
<link>https://hdl.handle.net/1721.1/158514</link>
<description>Guiding Navigation of Unknown Environments with Distant Visual Cues
Fahnestock, Ethan Kendall
While navigating unknown environments, robots rely primarily on proximate features for guidance in decision making, such as depth information from lidar or stereo used to build a costmap, or local semantic information from images. The limited range over which these features can be used can result in poor robot behavior when assumptions made by motion planning about the cost of the map beyond the range of proximate features misguide the robot. Integrating “far-field” image features that originate beyond these proximate features into the mapping pipeline promises more intelligent and aware navigation through unknown terrain. To navigate with far-field features, key challenges must be overcome. Because far-field features are typically too distant to localize precisely, they are difficult to place in a map. Additionally, the large distance between the robot and these features makes connecting them to their navigation implications more challenging. In this thesis we propose FITAM, an approach that learns from previous experience, in a self-supervised manner, to use far-field features to predict navigation costs that guide navigation through unknown environments. Unlike previous work, our approach does not rely on flat-ground-plane assumptions or range sensors to localize observations. We demonstrate the benefits of our approach through simulated trials and real-world deployment on a Clearpath Robotics Warthog navigating through a forest environment.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158514</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated Visible-Light Liquid-Crystal-Based Modulators and&#13;
Grating-Based Antennas</title>
<link>https://hdl.handle.net/1721.1/158513</link>
<description>Integrated Visible-Light Liquid-Crystal-Based Modulators and&#13;
Grating-Based Antennas
Garcia Coleto, Andres
Current developments in integrated visible-light photonics have led to advancements in applications such as augmented-reality displays and quantum systems. However, the development of crucial integrated-photonics devices such as integrated grating-based antennas and integrated optical modulators has predominantly focused on the infrared spectrum, leaving a gap in visible-light technologies. This thesis addresses this gap by designing and experimentally demonstrating integrated visible-light liquid-crystal-based (LC-based) modulators and grating-based antennas. First, we provide a thorough design guide for integrated visible-light grating-based antennas and experimentally demonstrate five antennas with varying advanced capabilities, including the first visible-light unidirectionally-emitting grating-based antennas for integrated optical phased arrays (OPAs), facilitating the use of integrated OPAs for new visible-light applications. Second, we discuss the fabrication processes, considerations, and evaluation techniques for successful packaging of integrated LC modulators, supporting the broader integration of LC into silicon-photonics platforms, enabling more compact and efficient on-chip modulation. Third, we experimentally demonstrate the first integrated visible-light LC-based variable-tap amplitude modulators, enabling a compact and low-power solution to integrated visible-light amplitude modulation for high-density integrated visible-light systems. Fourth, we experimentally demonstrate the first 300-mm wafer-scale platform and fabrication process that results in mechanically-flexible photonic wafers and chips, enabling the field of integrated photonics to advance into new application areas that require flexible photonic chips.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158513</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contact Free Monitoring of Cell Density in a Bioreactor with Magnetic Resonance Relaxometry</title>
<link>https://hdl.handle.net/1721.1/158512</link>
<description>Contact Free Monitoring of Cell Density in a Bioreactor with Magnetic Resonance Relaxometry
Gaensbauer, Hans
Frequent, low-latency measurements of bioreactor culture growth are critical for achieving maximum culture efficiency and productivity. Typical cell density and viability measurements are made by removing a sample from the culture, but this approach is both slow and unsuitable for small culture volumes that cannot support frequent destructive sampling. In this work, magnetic resonance relaxometry measurements taken through the walls of the bioreactor tubing are used to monitor the cell density in near real-time. Using intracellular iron as the marker, the system detects variations in cell density in minutes, enabling rapid intervention to save the culture that would be impossible with the once-daily measurements taken by a traditional sampling-based culture analysis system. Given the biochemical importance of intracellular iron, these measurements have the potential to provide phenotypic information on cells without disrupting the bioreactor culture.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158512</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Mechanics for Multi-step Robotic Manipulation Planning</title>
<link>https://hdl.handle.net/1721.1/158511</link>
<description>Leveraging Mechanics for Multi-step Robotic Manipulation Planning
Holladay, Rachel
This thesis focuses on enabling robots to robustly perform complex, multi-step manipulation tasks, like chopping vegetables or wielding a wrench. Completing such tasks requires a robot to plan and execute long sequences of actions, where each action involves many connected, discrete and continuous choices that are critically impacted by constraints relating to force, motion and contact. To tackle this, the thesis contributes models and algorithms that exploit the physics and geometry of the world to address the dual challenges of long-horizon decision-making and acting under uncertainty. We apply this in the context of three domains: in-hand manipulation, forceful manipulation and briefly-dynamic manipulation.&#13;
&#13;
First, to reorient a grasped object, we develop a sampling-based motion planner to generate sequences of pushes that slide the object in-hand. We derive an abstraction for pushing to enable the planner to reason about frictional constraints. Second, we focus on forceful manipulation tasks, such as opening a childproof medicine bottle or twisting a nut on a bolt, where the robot's planning choices are impacted by the need to exert force. We define constraints that explicitly consider torque and frictional limits and integrate these into an existing task and motion planning framework. We leverage cost-sensitive planning to enable the robot to generate plans that are robust to uncertainty in the physical parameters. Finally, we frame planning with dynamic actions, like shoveling or toppling, as requiring the robot to reason about both action uncertainty and potential dead ends. We learn a simple action model and formulate a sample-based manipulation planner that guards against dead ends in the face of uncertainty. Throughout this thesis, we validate the practical applicability of our model-based approaches by evaluating them on real robots.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158511</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational and Statistical Detection of High-Dimensional Latent Space Structure in Random Networks</title>
<link>https://hdl.handle.net/1721.1/158510</link>
<description>Computational and Statistical Detection of High-Dimensional Latent Space Structure in Random Networks
Bangachev, Kiril
A probabilistic latent space graph PLSG(n, Ω, D, σ) is parametrized by its number of vertices n, a&#13;
probability distribution D over some latent space Ω, and a connection function σ such that σ(x, y) ∈ [0, 1] almost surely with respect to D. To sample from PLSG(n, Ω, D, σ), first, for each node i, an independent latent (feature) vector xᵢ is drawn from Ω according to D. Then, for each pair of vertices i and j, an edge is drawn independently with probability σ(xᵢ, xⱼ). Interest in settings of high-dimensional latent spaces Ω has surged in recent years due to the rise of high-dimensional data and powerful compute.&#13;
&#13;
The features x₁, x₂, . . . , xₙ are oftentimes hidden due to privacy considerations or absence of measurement. This gives rise to many challenging statistical tasks. A prerequisite for nearly any more sophisticated inference and estimation task is the following simple hypothesis testing question. When can we even test for the presence of high-dimensional latent space structure? When is there a computationally efficient test and what could this computationally efficient test be? We address the following aspects of these questions in the thesis.&#13;
&#13;
Chapter 2: We focus on the canonical geometric setting when latent vectors are distributed uniformly over the sphere [mathematical formula] where Tₚ is such that the expected graph density is p. A conjecture that has witnessed continuous interest and progress in the past 15 years is that the information-theoretically optimal test for detecting the spherical random geometric graph is the signed triangle count. We contribute to the existing literature by confirming that the signed triangle count is computationally optimal among low-degree polynomial tests. Our main technical ingredient is a strategy for bounding Fourier coefficients of random geometric graphs based on a representation of spherical random geometric graphs as Erdős-Rényi graphs with few planted edges. This part of the thesis is based on [BB24b].&#13;
&#13;
Chapter 3: The conjectured optimality of the signed triangle count and the relevance of triangle-based statistics to the axiomatic triangle inequality of metric spaces have led to the conventional wisdom that triangle-based statistics are optimal in monotone random geometric graphs. We break this intuition by showing that in the case of a sup-norm geometry over the torus, the signed 4-cycle count is strictly stronger than the signed triangle count and is, furthermore, optimal among low-degree tests. Our main technical contribution is a novel strategy for bounding Fourier coefficients of random geometric graphs mimicking the cluster-expansion formula from statistical physics. This part of the thesis is based on [BB24a].&#13;
&#13;
Chapter 4: While random geometric graphs over the sphere with Euclidean geometry and the torus with sup-norm geometry are interesting mathematically, they are perhaps too simplistic to describe real-world networks. Hence, one should ask to what extent the results and techniques used for these models generalize to other probabilistic latent space graphs. We introduce a new family of probabilistic latent space graphs which we call random algebraic graphs. In random algebraic graphs, Ω is an algebraic group and σ is compatible with the group structure. This family captures the aforementioned random geometric graphs as well as instances of the stochastic block model and random subgraphs of Cayley graphs. We have two sets of results. First, we develop a general criterion based solely on the magnitudes of Fourier coefficients of σ for the statistical hardness of detecting a random algebraic graph when the underlying group is the Boolean hypercube. We use this result to provide a uniform approach to many previously known results in the literature, but also highlight that certain structural properties of the connection function such as non-trivial symmetries and non-monotonicity yield novel behavior. Second, we exhibit a universal behavior for the impossibility of detecting a random algebraic graph based solely on the group size but not on the group structure. The result can be equivalently phrased in terms of the local structure of typical Cayley graphs. This part of the thesis is based on [BB23].
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158510</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approximation and System Identification Techniques for Stochastic Biomolecular Systems</title>
<link>https://hdl.handle.net/1721.1/158509</link>
<description>Approximation and System Identification Techniques for Stochastic Biomolecular Systems
Grunberg, Theodore W.
Many biomolecular systems can be modeled as chemical reaction networks with a set of relevant species interacting via chemical reactions. When the molecular counts of the species are small, the inherent stochasticity in the occurrence of the reactions plays an important role in the behavior of the system. This stochasticity presents opportunities for system identification, since when a large population of cells is measured, one has many samples from the underlying distribution of the stochastic model. On the other hand, using the stochastic models of chemical reaction networks, given by continuous time Markov chains with countably infinite state spaces, creates computational and analytical difficulties when performing analysis or system identification. Therefore, one must turn to approximate models: reduced-order models that exploit timescale separation between different sets of chemical reactions, or deterministic and diffusion approximations that replace the continuous time Markov chain with an ordinary differential equation or a stochastic differential equation, respectively. This thesis makes contributions in both directions, rigorously justifying such approximations as well as developing the theory to perform system identification on the approximate models.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158509</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid Magnonics in Antiferromagnets and Cavity Spintronic Devices</title>
<link>https://hdl.handle.net/1721.1/158508</link>
<description>Hybrid Magnonics in Antiferromagnets and Cavity Spintronic Devices
Hou, Justin T.
Hybrid dynamic systems combine advantages from different subsystems for realizing information processing tasks in both classical and quantum domains. Magnons, the collective spin wave excitations in magnetically ordered materials, have recently attracted great attention for realizing hybrid dynamic systems. In this thesis, we develop hybrid magnonic systems with reduced complexity, improved scalability, and new functionality. In the first work, by utilizing the van der Waals antiferromagnetic material CrCl3, we realize strong magnon-magnon coupling within a single material, simplifying the design of magnon-magnon hybrid systems, which conventionally require two magnetic materials. Secondly, by utilizing planar microwave resonators, we realize on-chip, lithographically scalable, and Circuit Quantum Electrodynamics compatible magnon-photon hybrid systems. Strong magnon-photon coupling with three orders of magnitude reduction in spin number is demonstrated due to the reduced effective cavity mode volume. Moreover, the on-chip design, featuring substantial coupling strength, enables the integration of spintronic techniques to control the magnon subsystem dynamics via electrical currents. Along this line, in the third work, we theoretically propose a novel microwave oscillator device: a spin-torque-oscillator maser, which combines a spin-torque oscillator with a resonant cavity. This device aims to overcome the limitations of area, power, and linewidth inherent to traditional spin-torque nano-oscillator devices. In the fourth work, we experimentally realize a tunable magnon-photon hybrid system that leverages the spin-torque effect to electrically modulate magnon dissipation. We observe distinct linewidth modulation effects in systems with different cooperativities. Finally, we suggest methods to enhance the efficiency of magnon dissipation tuning while reducing power consumption, thereby laying groundwork for the development of spin-torque-oscillator masers.
This thesis work serves as a foundation for future advancement of hybrid magnonic systems, highlighting their potential for both fundamental research and practical device applications.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158508</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Factorization and Compositional Generalization in Diffusion Models</title>
<link>https://hdl.handle.net/1721.1/158507</link>
<description>Factorization and Compositional Generalization in Diffusion Models
Liang, Qiyao
One of the defining features of human intelligence is compositionality—the ability to generate an infinite array of complex ideas from a limited set of components. This capacity allows for the creation of novel and intricate combinations of arbitrary concepts, enabling potentially infinite expressive power from finite learning experiences. A likely prerequisite for the emergence of compositionality is the development of factorized representations of distinct features of variation in the world. However, the precise mechanisms behind the formation of these factorized representations in the human brain, and their connection to compositionality, remain unclear. Diffusion models are capable of generating photorealistic images that combine elements not co-occurring in the training set, demonstrating their ability to compositionally generalize. Yet, the underlying mechanisms of such compositionality and its acquisition through learning are still not well understood. Additionally, the relationship between forming factorized representations of distinct features and a model’s capacity for compositional generalization is not fully elucidated. In this thesis, we explore a simplified setting to investigate whether diffusion models can learn semantically meaningful and fully factorized representations of composable features. We conduct extensive controlled experiments on conditional diffusion models trained to generate various forms of 2D Gaussian data. Through preliminary investigations, we identify three distinct learning phases in the model, revealing that while overall learning rates depend on dataset density, the rates for independent generative factors do not. Moreover, our findings show that models can represent continuous features of variation with semi-continuous, factorized manifolds, resulting in superior compositionality but limited interpolation over unseen values. 
Based on our investigations, we propose a more data-efficient training scheme for diffusion models and suggest potential future architectures for more robust and efficient generative models.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158507</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multimodal Representation Learning for Agentic AI Systems</title>
<link>https://hdl.handle.net/1721.1/158506</link>
<description>Multimodal Representation Learning for Agentic AI Systems
Andonian, Alexander
Modern artificial intelligence (AI) is poised to transform the scientific process, from ideation and experimentation to peer review. Many researchers posit that emerging generalist AI “agents” will soon no longer be mere tools, but equal partners in scientific exploration. In this work, we contribute to this evolving landscape through converging lines of research focused on developing and evaluating more efficient and interpretable AI systems, spanning both vision and language domains, and their applications to scientific evaluation and review. Our research focuses on three key areas. First, we introduce a novel framework to enhance the efficiency and robustness of cross-modal representation learning methods. Our approach utilizes progressive self-distillation and soft image-text alignments to model the many-to-many correspondences found in noisy web-harvested datasets. Extensive evaluation demonstrates that our method consistently outperforms CLIP across multiple benchmarks, including improved robustness to natural distribution shifts. We extend this framework to zero-shot open vocabulary detection, introducing augmentation, architectural and self-training strategies for improving vision-text feature alignment. Evaluation on long-tail detection benchmarks demonstrates state-of-the-art performance, with competitive performance for unseen classes, as well as superior transfer to additional datasets. Finally, we present the Review Integrated Scientific Evaluation (RISE) benchmark, a novel framework for assessing AI performance in understanding, critiquing, and providing constructive feedback on scientific manuscripts. Our study compares AI-generated reviews against human expert evaluations, revealing both the promising capabilities and current limitations of AI in scientific peer review. 
The dissertation concludes by proposing future directions for AI-accelerated science, emphasizing the need for collaborative human-AI scientific communities and the development of evaluation methods for higher-level autonomous capabilities in scientific domains. Altogether, this work contributes to the ongoing discourse on AI’s role in scientific research and paves the way for more rigorous integration of AI systems into the scientific process.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158506</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A biogeochemical investigation of Trunk River Lagoon, Falmouth, Massachusetts</title>
<link>https://hdl.handle.net/1721.1/158505</link>
<description>A biogeochemical investigation of Trunk River Lagoon, Falmouth, Massachusetts
Dumit, Diana
This thesis delves into the intricate dynamics of lipid biomarker creation, deposition, and preservation within Trunk River Lagoon, encompassing sediments, microbial blooms, and mats. Through a multi-faceted approach, the research uncovers the interplay between natural processes and anthropogenic influences, shedding light on the evolutionary trajectory of this aquatic ecosystem. From ancient sediment records to contemporary microbial communities, each aspect offers unique insights into environmental changes and the implications for interpreting biomarker signals.&#13;
&#13;
In Chapter 2, we employ radiocarbon dating, stable isotope geochemistry, and lipid biomarker analyses on a 2-meter sediment core spanning 3000 years, revealing shifts from a freshwater to a brackish environment with evidence of anthropogenic contamination. In Chapter 3, biomarker analyses on active blooms unveil a diverse microbial community receiving organic inputs from various sources, including sewage. Moreover, lipid analyses reveal rapid sulfurization of organic matter in the water column. In Chapter 4, attention turns to the preservation of biolipids within modern microbial mats. Through detailed analysis, the study reveals primary diagenetic processes such as hydrogenation and sulfurization, highlighting the complexities involved in interpreting biomarker distributions accurately.&#13;
 &#13;
Overall, this research underscores the necessity of comprehensive lipid analysis in modern environments to accurately interpret biomarker distribution and abundance. These findings not only advance our understanding of sedimentary records and biomarker signals but also emphasize the complex interplay between natural processes and anthropogenic influences in shaping contemporary aquatic ecosystems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158505</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fabrication and Testing of A Middle-Ear Implanted Microphone</title>
<link>https://hdl.handle.net/1721.1/158504</link>
<description>Fabrication and Testing of A Middle-Ear Implanted Microphone
Wawrzynek, Emma
Cochlear implants are devices that can restore hearing to people with sensorineural deafness. Despite their name, cochlear implants rely on an external unit which contains components such as a microphone. This work presents the design, fabrication, and testing of an implantable middle-ear microphone called the “UmboMic” that measures the displacement of the tympanic membrane at the umbo. Particular consideration is paid to the biocompatibility of the microphone and its long-term durability in the body. The work discusses biocompatible materials, methods of encapsulation, and techniques for testing device robustness.&#13;
&#13;
The UmboMic is a piezoelectric displacement sensor that is implanted in the middle ear cavity and contacts the umbo. As the umbo moves, it displaces the UmboMic, resulting in a charge that is amplified with a custom amplifier. The active area of the UmboMic is a triangular-shaped cantilever made from two layers of piezoelectric thin film called polyvinylidene fluoride (PVDF). The bimorph design reduces common-mode noise as compared to our previous microphone designs.&#13;
&#13;
Extensive bench testing and experiments in fresh human cadavers demonstrate excellent microphone performance despite the use of biocompatible materials. The UmboMic sensor is well shielded against electromagnetic interference, tolerant to implantation variations, and can be repeatably fabricated with little variation between sensors. It demonstrates high sensitivity from 100 Hz to above 8 kHz, with a sensitivity of 58 fC/Pa at 1 kHz and 230 fC/Pa at 2 kHz when including the outer ear. The noise floor of the UmboMic normalized over 1/3-octave bins is 10⁻² fC, and the A-weighted equivalent input noise of the UmboMic with the outer ear is 82.4 dB SPL from 100 Hz to 7 kHz. When tested in five different human cadavers, the UmboMic sensors work reliably despite anatomical differences.&#13;
&#13;
Internalizing the entire cochlear implant would greatly improve the quality of life of wearers. In their current form, cochlear implants cannot be used during sleep or vigorous activity, are susceptible to noise from wind, and function poorly in loud environments. Implanting the device would mitigate these problems and provide users with the discretion of an invisible device. Our prototype demonstrates the feasibility of an implanted microphone and is an important step towards developing a totally implantable cochlear implant.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158504</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Acquisition of Formal Semantics in Statistical Models of Language</title>
<link>https://hdl.handle.net/1721.1/158503</link>
<description>On the Acquisition of Formal Semantics in Statistical Models of Language
Jin, Charles C.
The increasingly impressive performance of recent large language models raises a crucial question: to what extent can such models, trained solely on text, develop an understanding of language grounded in the semantics of the underlying domain? Progress on this question carries significant practical and philosophical implications for the relationship between meaning, understanding, and the capacity to exhibit seemingly intelligent behavior.&#13;
&#13;
This thesis makes two primary contributions. First, it develops a scientifically rigorous approach to studying what statistical models of language can understand about language based on the formal semantics of programming languages. Specifically, it leverages the probing classifiers framework: training small classifiers to find encodings of program semantics within the model's internal representations. A main insight is that the clean separation between syntax and semantics in this domain allows for greater control in experimental design. It introduces two new techniques. The first, semantic probing interventions, is a general methodology for distinguishing whether the probe's measurements reflect (1) that the learned representations of the language model encode semantics or (2) that the probe itself has learned to infer semantics from representations of pure syntax. The second, latent causal probing, is a formal framework for probing that provides a robust empirical methodology for studying whether language models are able to access the latent concepts that underlie the text they observe during training. A key innovation is to create a single structural causal model that unifies (1) the data generation process underlying the text used to train the language model and (2) the steps of a probing experiment. This makes it possible to conduct a causal analysis that intervenes on the data generation process to trace the influence of the latent variables in the training data through the model's internal representations.&#13;
&#13;
The second core contribution of this thesis consists of a series of experimental studies. Specifically, we train a language model on a synthetic grid-world navigation task, then probe the model's learned representations for encodings of the unobserved, intermediate world states. By leveraging the techniques we develop, the results deliver strong empirical evidence that statistical models of language are latent concept learners: capable of inducing the latent variables that underlie the generation of their training data, despite being trained only to model a conditional distribution over tokens.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158503</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Language Evolution for Parallel and Scientific Computing</title>
<link>https://hdl.handle.net/1721.1/158502</link>
<description>Language Evolution for Parallel and Scientific Computing
Churavy, Valentin
Scientists working on the biggest problems facing humanity today write and run large-scale computer simulations. It has been a decades-long dream of both scientists and programming language designers to make the development and use of high-performance computing easier. Many attempts have failed, perhaps because this is a hard problem, perhaps because the social motivation and the required steps to achieve success have not come together, and perhaps because solutions to date address only part of the problem, never fully solving it. This thesis proposes a combination of features necessary to form a solution, starting from a bedrock that combines performance with high-level abstractions in a single language. The language needs to enable composable abstractions, or we are doomed to keep developing the single-shot applications of the past. These abstractions should enable code reuse across different compute architectures, to allow users to keep up with the fluid landscape of accelerators. These abstractions should enable code reuse for different mathematical objects such as dense, sparse, and structured matrices. These abstractions should enable code reuse for differentiable programming, to enable integration of techniques like sensitivity analysis and scientific machine learning. With the right methodology, these abstractions can compose with each other and specialize to the domain. I will demonstrate that high-level array-based abstractions and a low-level performance-portable kernel programming framework form a potent combination for large-scale scientific computing. I will show its efficacy using real-world scientific codes. Furthermore, I will introduce a differentiable programming framework built on top of a general automatic differentiation engine operating at the compiler level.
The automatic differentiation framework outperforms the state of the art, is capable of synthesizing gradient functions from GPU kernels, and can differentiate a wide variety of parallel constructs. As the infrastructure supporting this language needs to be more sophisticated than that of yesteryear, new problems arise. This thesis solves some of these problems and demonstrates the solutions on a fluid dynamics code used in climate modelling, one of many imaginable applications.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158502</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Composing Foundation Models for Decision Making</title>
<link>https://hdl.handle.net/1721.1/158501</link>
<description>Composing Foundation Models for Decision Making
Ajay, Anurag
Recent advancements in conditional generative modeling have enabled models like DALLE and GPT-4 to generate high-resolution images and coherent text from brief prompts. However, developing a foundation model for decision-making is hindered by the scarcity and expense of collecting paired visual, language, and action data. To address this challenge, this thesis proposes a scalable alternative: a compositional model architecture that leverages separately trained expert models specializing in language, vision, and action. By reducing the need for extensive paired data collection, this approach maintains efficiency in solving novel decision-making tasks while mitigating the data scarcity problem. Our compositional foundation model employs a large language model for task planning, a video diffusion model to generate detailed video trajectories, and an inverse dynamics model to map videos into actions. We demonstrate the effectiveness of this approach in the context of table-top manipulation tasks. Furthermore, given the application of foundation models across various embodied agents, there is a growing need for systematically evaluating these models’ "common sense" understanding of the world. This evaluation is crucial for the successful deployment of embodied agents in real-world scenarios. To address this need, we introduce the first open-vocabulary benchmark for Embodied Question Answering (EQA). This benchmark assesses the foundation models’ ability to comprehend and reason about the world. In summary, by addressing data scarcity in developing foundation models for decision-making and establishing a benchmark for evaluating the reasoning capabilities of embodied agents, this thesis aims to advance the development of foundation models for decision-making.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158501</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Unified Framework for Visual Recognition and Generation via Masked Generative Modeling</title>
<link>https://hdl.handle.net/1721.1/158500</link>
<description>Towards a Unified Framework for Visual Recognition and Generation via Masked Generative Modeling
Li, Tianhong
Recognition and generation are two key tasks in computer vision. However, recognition and generative models are typically trained independently, which ignores the complementary nature of the two tasks. In this thesis, we present a unified framework for visual data recognition and generation via masked generative modeling, and further demonstrate its superior power to address challenges across various applications. We will begin with MAGE, a novel framework that unifies image generation and recognition while achieving state-of-the-art performance on both tasks. We then extend it into vision-language multi-modal training through ITIT, which utilizes unpaired image and text data to train models capable of high-quality, bidirectional image-text generation – the recognition power enables accurate image-to-text captioning, while the generation power enables realistic text-to-image generation. Moreover, inspired by the synergy between image generation and recognition observed in MAGE, we introduce RCG, a framework that enhances the quality of unconditional image generation to the same level as class-conditional generation, by using representations learned in a self-supervised manner to guide the generative process. Lastly, we introduce Reparo to address the challenge of packet loss in video conferencing with the help of masked generative modeling, enabling the reconstruction of lost video data without traditional error correction methods. This ensures high-quality communication even under conditions of substantial data loss. These works demonstrate the power of the proposed unified framework, not only to push forward the state of the art in individual downstream applications but also to provide robust, versatile solutions adaptable to a wide range of real-world problems in computer vision and beyond.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158500</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low-cost Agents with Language Perception and Dynamic Inference</title>
<link>https://hdl.handle.net/1721.1/158499</link>
<description>Low-cost Agents with Language Perception and Dynamic Inference
Pan, Bowen
Designing efficient artificial intelligence agents presents significant challenges, particularly in terms of learning and inference costs. Traditional agents often suffer from high learning expenses due to their limited ability to generalize across diverse tasks and environments. Recent advances in large language models (LLMs) have shown strong generalization capabilities by leveraging high-level abstractions of the world through language. In this thesis, we propose leveraging language as a perceptual representation to enable LLM-based agents to perform vision-language navigation tasks with reduced data collection costs. We demonstrate that language not only facilitates the generation of efficient synthetic data but also serves as a bridge to minimize domain gaps between different environments. However, transformer-based agents are burdened with high inference costs, especially when handling long-horizon visual content. To mitigate this, we introduce two strategies: (1) reducing visual input redundancy through dynamic token selection, and (2) accelerating model inference using a memory-efficient Mixture of Experts (MoE) architecture. Together, these approaches offer a robust framework for enhancing both learning and inference efficiency in LLM agents.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158499</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structured Handwritten Input for Dementia Classification</title>
<link>https://hdl.handle.net/1721.1/158498</link>
<description>Structured Handwritten Input for Dementia Classification
Flores, Gerardo
We explore the use of deep learning to score the Digit Symbol Substitution Test (DSST), a paper-and-pencil behavioral test useful in diagnosing Alzheimer’s. We train a model to classify Alzheimer’s based on the subject’s responses to any one of the 108 queries in the test. We then combine predictions across the test to produce a new classifier that is considerably stronger. We also conduct an extensive search of architectures and optimization techniques that have proved useful in other settings. The ultimate result is a very strong classifier, with an AUC of 86% for a response to a single question and 97.25% for an overall patient.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158498</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Safe and Ethical Implementation of Intelligent Systems</title>
<link>https://hdl.handle.net/1721.1/158497</link>
<description>Safe and Ethical Implementation of Intelligent Systems
Dai, Zheng
In the year 2024, the prospect of solving human level tasks using intelligent systems is no longer the subject of science fiction. As these systems play an increasingly critical role in our day-to-day lives, it becomes ever more important to consider the safety and ethics surrounding their implementation. This is a multifaceted challenge spanning multiple disciplines, involving questions at the regulatory, engineering, and theoretical levels. This thesis discusses three projects that span these levels. We first explore the problem of tracing causal influence from training data to outputs of generative models. In our exploration we encounter the phenomenon of unattributability, and consider its scientific and regulatory implications. We next tackle the challenge of designing a high diversity library of therapeutics that is depleted of dangerous off-target binders using intelligent systems, developing a suite of inference and optimization tools along the way. Finally, we derive universal bounds for the robustness of image classifiers that inform us of how safe these intelligent systems can be in theory. Together, these projects present a multilevel overview of the safe and ethical implementation of intelligent systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158497</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maliciously Secure Computation, Theory and Practice</title>
<link>https://hdl.handle.net/1721.1/158496</link>
<description>Maliciously Secure Computation, Theory and Practice
de Castro, Leo
Data analytics fuels countless innovations and reveals unparalleled insights, and these benefits only grow the more data is amassed. This has resulted in the size of datasets and the compute needed to manage them becoming too resource-intensive for even large companies to handle alone, fueling the rise of cloud computing and outsourced data management. A central problem with this outsourcing is security. How can parties ensure that an untrusted cloud is accurately running the prescribed protocol? More generally, how can two parties collaborate to run a computation over joint inputs, where both inputs remain private while still delivering the correct output? This thesis focuses on answering these questions by constructing secure computation protocols with low communication &amp; computation overhead. The protocols in this thesis include several concretely efficient constructions of private information retrieval, a functional commitment scheme for all functions, and a general two-party secure computation scheme that comes within polylogarithmic factors of the optimal communication and computation complexity. In addition to their efficiency, all protocols presented in this thesis guarantee protection against worst-case, malicious adversaries.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158496</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatially-Adaptive LiDAR and Underwater Communications Using Integrated Optical Phased Arrays</title>
<link>https://hdl.handle.net/1721.1/158495</link>
<description>Spatially-Adaptive LiDAR and Underwater Communications Using Integrated Optical Phased Arrays
DeSantis, Daniel Markus
Silicon-photonics microsystems have enabled advanced optoelectronic capabilities in applications spanning from sensors to communication systems. In particular, integrated optical-phased-array-based (OPA-based) technologies, such as solid-state LiDAR and free-space optical communications (FSOC) systems, show promise to revolutionize the way we sense and communicate. This thesis enables new integrated-OPA-based solid-state beam-steering capabilities for these existing applications, as well as emerging spatially- and spectrally-demanding applications. First, we develop and experimentally demonstrate a novel multi-beam solid-state OPA-based LiDAR system capable of detecting and ranging multiple targets simultaneously, passively, and without rastering. Through this work, we demonstrate a new spatially-adaptive sensing modality for solid-state LiDAR that promises to reduce the data deluge associated with LiDAR sensing for autonomous systems. Second, we show the first, to the best of our knowledge, spiral integrated OPAs, enabling, for the first time, emission of focusing beams with tunable focal heights. This work introduces a first-of-its-kind integrated OPA architecture and, as such, enables new functionality for emerging applications of OPAs that require focusing operation, such as biophotonic optical tweezers and chip-based 3D printers. Third, we show the first visible-light integrated-OPA-based FSOC transmitter and use it to experimentally demonstrate the first integrated-OPA-based underwater-wireless-optical-communication (UWOC) link. This integrated OPA transmitter chip can reduce the size, weight, and mechanical complexity of apparatus for UWOC systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158495</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Spin Dynamics in Magnon and Quantum Spin Systems</title>
<link>https://hdl.handle.net/1721.1/158494</link>
<description>Interactive Spin Dynamics in Magnon and Quantum Spin Systems
Hu, Zhongqiang
Spintronics utilizes the intrinsic spin of electrons to design next-generation electronic devices, reducing power consumption and enabling innovative computing functions. Over the past decades, significant research interest has been directed toward two types of spin-based systems: collective excitations of spins, known as spin waves or magnons, in magnetic materials, and optically active spin defects as represented by nitrogen-vacancy (NV) centers in diamond, leading to the prosperity of magnonics, quantum sensing, and quantum information processing. As the understanding of dynamics in individual spin systems has deepened, recently there has been an increasing interest in the interactive dynamics within hybrid spin systems. This shift in focus reflects an increasing curiosity about how these complex interactions can be harnessed to further advance their microwave and quantum applications. However, several challenges persist, including the limited coherence length of magnons and the restricted frequency range of NV-based magnetometers, which will be tackled in this thesis. We first leverage the chirality of interlayer magnetic dipolar interactions to introduce an easily implementable system—antiparallel aligned magnetic multilayers—for realizing topological magnonic surface states and low-dissipation spin current transport in a tunable manner. We then expand the frequency window of NV-based magnetometers using nonlinear microwave-spin interactions, offering novel functionalities in quantum state control and sensing. We further exploit nonlinear spin dynamics by hybridizing NV centers with magnonic thin films, which not only amplifies the intensity of nonlinear resonance signals that are intrinsic to NV spins, but also enables novel frequency mixings through parametric pumping and nonlinear magnon scattering effects. 
We believe our study of interactive spin dynamics in hybrid systems involving magnons, quantum spin defects, and microwave photons helps optimize these systems for a wide range of applications in both classical and quantum domains.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158494</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling rational agents with limited capability</title>
<link>https://hdl.handle.net/1721.1/158493</link>
<description>Modeling rational agents with limited capability
Jia, Kai
In many scenarios, players exhibit inherent limitations in various aspects of their capability to generate maximally rational play in strategic games. Modeling such capability limitations and elucidating their implications will advance our understanding of the strategic interactions among players. In this thesis, I study two novel settings where players have limited capabilities. I formalize a hierarchy of capabilities and study related equilibrium concepts, computational complexity, solution algorithms, and the impact of varying capabilities on game outcomes.&#13;
&#13;
The first limited-capability setting is limited-perception games. I focus on a class of one-shot limited-perception games. Such games extend simultaneous-move normal-form games by presenting each player with an individualized perception of the true game. Players’ payoffs are determined by the true game hidden from players. The accuracy of a player’s perception is determined by the player’s capability level, with a higher level corresponding to a more accurate perception. I study both capability-oblivious and capability-aware players. A capability-oblivious player does not know they have limited perception and therefore plays the optimal strategy of their perceived game. I present payoff bounds and other predictable behavior of capability-oblivious players in a special class of limited-perception games. A capability-aware player reasons with the set of possible true payoff functions and other players’ perceptions and incentives to maximize their own objective (e.g., the worst-case payoff) based on their limited perception. I present novel formalizations of simultaneous-move equilibria and show the hardness of equilibrium solving. I further present positive results that (i) an approximate equilibrium has a compact, tractable representation; and (ii) a few classes of zero-sum games can be efficiently solved.&#13;
&#13;
The aforementioned efficiently solvable zero-sum games are reduced to solving nonsmooth convex programs. To this end, I present the Trust Region Adversarial Functional Subdifferential (TRAFS) algorithm for constrained optimization of unstructured nonsmooth convex Lipschitz functions. Unlike previous methods that assume a subgradient oracle model, I propose the functional subdifferential, defined as a set of subgradients that simultaneously captures sufficient local information for effective minimization while being easy to compute for a wide range of functions. Intriguingly, the TRAFS design also incorporates game-theoretical thinking. In each iteration, TRAFS solves a zero-sum game between the optimizer and a local approximation of the objective function to guarantee progress. The optimizer has access to step vectors in a local ℓ2-bounded trust region; the local approximation uses the functional subdifferential. TRAFS finds an approximate solution with an absolute error up to ϵ in O(1/ϵ) or O(\sqrt{1/ϵ}) iterations depending on whether the objective function is strongly convex, improving the previously best-known bounds of O((1/ϵ)^2) and O(1/ϵ) in these settings. TRAFS makes faster progress if the functional subdifferential satisfies a locally quadratic property; as a corollary, TRAFS achieves linear convergence (i.e., O(log 1/ϵ)) for strongly convex smooth functions. In the numerical experiments, TRAFS solves twice as many problems compared to the second-best method and is on average 39.1x faster on problems solved by both methods.&#13;
&#13;
The second limited-capability setting is limited-strategy games where a player’s capability limits the strategies available to them. I work with a formalization where a player’s strategy space is defined as programs in a Domain-Specific Language (DSL). A player’s capability limits the size of programs available to that player. I focus on characterizing the impact of player capability on game outcomes. I study a new game model called McDncDa derived from network congestion games. I show that it is computationally hard to determine whether an McDncDa instance is capability-positive (i.e., whether increasing a player’s capability level leads to a better payoff). I then study a parameterized special class of McDncDa called MGMG. I show that MGMG is always capability-positive, and it is socially capability-positive (i.e., the sum of all players’ payoffs always gets better if every player’s capability level is increased by one) if some resources in the game have increasing returns to scale despite the existence of multiple equilibria.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158493</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributional Private Information Retrieval</title>
<link>https://hdl.handle.net/1721.1/158492</link>
<description>Distributional Private Information Retrieval
Lehmkuhl, Ryan
A private-information-retrieval (PIR) scheme lets a client fetch a record from a remote database without revealing which record it has fetched. Classic PIR schemes treat all database records the same but, in practice, some database records are much more popular (i.e., commonly fetched) than others. We introduce distributional private information retrieval, a new type of PIR that can run faster than classic PIR—both asymptotically and concretely—when the popularity distribution is heavily skewed. Distributional PIR provides exactly the same cryptographic privacy notion as classic PIR. The speedup comes from providing a relaxed form of correctness: distributional PIR guarantees reliable retrieval for PIR queries that follow the popularity distribution, but only “best-effort” retrieval for out-of-distribution queries. We give several constructions of distributional-PIR schemes that make black-box use of existing standard PIR protocols. On a popularity distribution drawn from real-world Twitter data, distributional PIR reduces compute costs by 5.1–77× compared to existing techniques. Finally, we build CrowdSurf, an end-to-end system for privately streaming social-media posts, and show that our PIR schemes reduce the end-to-end server cost by 8×.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158492</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for Enhancing Robustness and Generalization in Machine Learning</title>
<link>https://hdl.handle.net/1721.1/158491</link>
<description>Methods for Enhancing Robustness and Generalization in Machine Learning
Schechter, Amit
We propose two methods for improving subgroup robustness and out-of-distribution generalization of machine learning models. First, we introduce a formulation of Group DRO with soft group assignment. This formulation can be applied to data with noisy or uncertain group labels, or when only a small subset of the training data has group labels. We propose a modified loss function, explain how to apply it to data with noisy group labels as well as data with missing or few group labels, and perform experiments to demonstrate its effectiveness. In the second part, we propose an invariant decision tree objective that aims to improve the robustness of tree-based models and address a common failure mode of existing methods for out-of-domain generalization. We demonstrate the benefits of this method both theoretically and empirically. Both of these approaches are designed to enhance machine learning models’ performance under distribution shift.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158491</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing the Epistemic Uncertainty of Predictive Action Models and Sampling-Based Motion Planners for Robotic Manipulation</title>
<link>https://hdl.handle.net/1721.1/158490</link>
<description>Characterizing the Epistemic Uncertainty of Predictive Action Models and Sampling-Based Motion Planners for Robotic Manipulation
Shaw, Seiji A.
We derive methods to represent the epistemic uncertainty of models used in long-horizon robot planning problems in autonomous manipulation. We develop a representation of epistemic uncertainty for two types of models: uncertainty over the physical parameters of a model that predicts the observed outcome of a manipulation action and uncertainty over a geometric graph built by a sampling-based motion planner as a representation of the configuration space to answer a motion planning query. We propose a simple planning system that integrates these uncertainty characterizations to reason about the informational value of executing a manipulation action or allocating a number of samples to a sampling-based motion planner.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158490</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Burst Imaging with Learned Continuous Kernels</title>
<link>https://hdl.handle.net/1721.1/158488</link>
<description>Burst Imaging with Learned Continuous Kernels
Biscarrat, Camille
Burst imaging is a technique that consists of taking multiple images in quick succession and merging them into one output image. By aligning and combining data from multiple frames, we can increase resolution, attenuate noise, reduce motion blur and expand the dynamic range to obtain a higher quality image. In this thesis, we propose a method that learns continuous kernels to process and merge burst frames. We show that the learned kernels adapt to local image information and take advantage of sub-pixel sample location information to demosaic, denoise and merge the burst into a high quality output.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158488</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic Weyl Semimetals for Spintronic Applications</title>
<link>https://hdl.handle.net/1721.1/158487</link>
<description>Magnetic Weyl Semimetals for Spintronic Applications
He, Zhiping
Magnetic Weyl semimetals are a category of topological materials that hold promise for spintronic applications due to their unconventional transport properties, which arise from both bulk and surface topological states, as well as the rich interplay between band topology and magnetism. Among the family of semimetallic materials, the antiferromagnetic Weyl semimetals Mn₃X (X=Sn, Ge, etc.) and the ferromagnetic Weyl semimetal Co₂MnGa have attracted significant interest. So far, despite extensive theoretical and experimental investigations, the magnetic dynamics of Mn₃X and the spin-polarized tunneling in Co₂MnGa-based spintronic devices have not been fully explored.&#13;
&#13;
In this thesis, I establish a theoretical framework to describe the low energy dynamics of strained Mn₃X. Using perturbation theory, I identify three distinct dynamic modes and derive a Landau-Lifshitz-Gilbert (LLG)-like equation to describe uniform mode dynamics. I also analyze the excitation of dissipative spin waves and the spin superfluidity state in Mn₃X by extending the model to include spatial inhomogeneity. The analytical results are validated against numerical simulations based on fully coupled LLG equations, where good agreement is achieved. In addition, I study fully epitaxial magnetic tunnel junctions (MTJs) composed of Co₂MnGa. By growing Co₂MnGa/MgO/Co₂MnGa stacks under different conditions, I develop a series of MTJs with varying degrees of chemical ordering in the Weyl semimetal electrodes and compare their tunneling magnetoresistance (TMR). I find that the TMR is enhanced with the improvement of the chemical ordering in Co₂MnGa. Our results reveal the relationship between the spin tunneling in MTJs and the chemical order of Co₂MnGa electrodes, offering insights into further enhancing TMR through Weyl semimetal engineering.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158487</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Superparamagnetic Tunnel Junctions for Reliable True Randomness and Efficient Probabilistic Machine Learning</title>
<link>https://hdl.handle.net/1721.1/158486</link>
<description>Superparamagnetic Tunnel Junctions for Reliable True Randomness and Efficient Probabilistic Machine Learning
Koh, Dooyong
Physical devices exhibiting stochastic functions with low energy consumption and high device density have the potential to enable complex probability-based computing algorithms, accelerate machine learning tasks, and enhance hardware security. Recently, superparamagnetic tunnel junctions (sMTJs) have been widely explored for such purposes, leading to the development of limited-scale sMTJ-based systems. Existing sMTJs face significant scalability and reliability issues, however, because their intrinsically low energy barrier and correspondingly small device area result in high sensitivity to external perturbations, as well as large variations from device to device. Here, we present an experimental demonstration of three-terminal sMTJs as reliable and potentially scalable sources of true randomness in the field-free regime. By leveraging dual-current controllability and incorporating feedback, we stabilize the switching operation of superparamagnets and achieve cryptographic-quality random bitstreams. The realization of controllable and robust true-random sMTJs underpins a general hardware platform for computing schemes exploiting the stochasticity in the physical world, as demonstrated by the generative artificial intelligence example in our experiment. Furthermore, we experimentally demonstrate a novel method of utilizing sMTJs as stochastic analog-to-digital converters (sADCs) in a crossbar array architecture for neural network acceleration, showing performance comparable to software implementations. This work highlights the potential of sMTJs to revolutionize energy-efficient computing and provides a foundation for future advancements in probabilistic computing and hardware security.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158486</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large Language Model Tools for Project-based Learning</title>
<link>https://hdl.handle.net/1721.1/158485</link>
<description>Large Language Model Tools for Project-based Learning
Ravi, Prerna
Project-Based Learning (PBL) has emerged as a prominent educational approach that immerses students in meaningful, real-world tasks, fostering deep and lasting learning experiences. Unlike traditional instructional methods, PBL emphasizes a student-centered pedagogy, where learners actively construct knowledge through exploration, collaboration, and reflection. This approach not only nurtures a love of learning but also encourages students to form personal connections to their academic experiences, making education more relevant and impactful. However, while PBL offers significant educational benefits, it also presents challenges for educators, including the complexities of designing and managing projects, assessing student learning, and balancing student autonomy with guided instruction. The advent of artificial intelligence (AI), particularly large language models (LLMs), holds promise for addressing these challenges by enhancing personalized learning, automating administrative tasks, and providing real-time feedback. To ensure that these AI tools are sustainable and conducive to diverse classroom contexts, it is crucial to involve educators in the design process from the outset.&#13;
&#13;
This thesis contributes to the intersection of PBL and generative AI by documenting a co-design process with interdisciplinary K-12 teachers aimed at integrating AI into PBL pedagogy. Through need-finding interviews, collaborative workshops, and iterative tool design, this research explores how AI can support teachers in implementing high-quality PBL while maintaining the integrity of student-centered learning. We also investigate how this technology can augment the current roles of teachers without replacing them, and support their professional growth.&#13;
&#13;
The thesis is structured around three key objectives: exploring the challenges educators face with PBL, co-designing AI tools that address these challenges, and proposing design guidelines for future AI tools in PBL classrooms. By refining the design of AI-powered PBL tools, enhancing teacher professional development resources, and ensuring these tools are accessible and equitable, educators will be better equipped to foster engaging, student-centered learning environments. These contributions not only encourage future research and development of AI educational tools, but also aim to foster a more immersive and constructionist learning approach in classrooms.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158485</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making Sense of Training Large AI Models</title>
<link>https://hdl.handle.net/1721.1/158484</link>
<description>Making Sense of Training Large AI Models
Ahn, Kwangjun
Today, one of the most impressive applications of optimization is the training of large AI models. But currently such models are trained with ad-hoc heuristics at a very large computational cost, mainly due to a lack of understanding of their working mechanisms. In this thesis, we conduct a systematic study of large-model optimization, crucially informed by practical applications. The first part investigates two interesting phenomena regarding optimization of Transformer-based models, one of the most popular architectures for language modeling. We investigate how training Transformer-based models can lead to remarkable properties such as in-context learning, and we further discuss the main challenges associated with Transformer training. The second part of this thesis focuses on understanding the Adam optimizer, one of the most popular algorithms for training large models. We offer a new view on Adam based on an online learning perspective that elucidates the importance of Adam’s algorithmic components. Building on this new perspective, we also prove that Adam achieves the optimal convergence rate in various non-convex optimization settings, both smooth and non-smooth. The third part of this thesis focuses on the unstable convergence phenomenon in training large models. We identify its main characteristics from first principles, and discuss its causes and implications for learning. We then discuss its connection to popular flat-minima optimization algorithms, and initiate a formal study of them by defining a formal notion of flat minima and analyzing the complexities of finding them.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158484</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Convex Network Flows</title>
<link>https://hdl.handle.net/1721.1/158483</link>
<description>Convex Network Flows
Diamandis, Theo
This thesis introduces a new framework for flow problems over hypergraphs. Our problem formulation, which we call the convex flow problem, only assumes that the constraints on the flows over each edge are in some convex set. The objective is to maximize a sum of concave utility functions---one for the net flow at every node and one for each edge flow---subject to these constraints. This framework not only includes many classic problems in network optimization, such as max flow, min-cost flow, and multi-commodity flows, but also generalizes these problems to allow, for example, concave edge gain functions. As a result, our framework includes applications spanning a number of fields: optimal power flow over lossy networks, routing and resource allocation in ad-hoc wireless networks, Arrow-Debreu Nash bargaining, and order routing through financial exchanges, among others. This problem has a number of interesting properties, including a 'calculus' of flow sets, an equivalent conic form, and a natural generalization of many classic network flow results.&#13;
&#13;
We develop an efficient algorithm for solving the convex flow problem by constructing a particular dual problem that decomposes over the edges of the hypergraph. This dual problem has a number of useful interpretations and admits a straightforward specification: the dual function and its gradient can be evaluated using only simple subroutines which often have closed-form solutions. These subroutines suggest a clean, easy-to-use problem interface, which we provide in the open-source software package ConvexFlows.jl, written in the Julia programming language. We discuss implementation considerations, including how to handle important special cases, and we provide a simple interface for specifying convex flow problems. We show that our solver is significantly faster than the state-of-the-art commercial optimization solver Mosek, even for small problem sizes with limited parallelization.&#13;
&#13;
Finally, we consider the nonconvex flow problem with fixed costs on the edges, i.e., where there is some fixed cost to send any nonzero flow over an edge. We show that this problem has almost integral solutions by a Shapley--Folkman argument, and we provide a simple modification of our original algorithm for this nonconvex problem. We conclude by discussing a number of interesting avenues for future work.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158483</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parsimonious Principles of Deep Neural Networks</title>
<link>https://hdl.handle.net/1721.1/158482</link>
<description>Parsimonious Principles of Deep Neural Networks
Huh, Minyoung
At the core of human intelligence lies an insatiable drive to uncover the simple underlying principles that govern the world’s complexities. This quest for parsimony is not unique to biological cognition but also seems to be a fundamental characteristic of artificial intelligence systems. In this thesis, we explore the intrinsic simplicity bias exhibited by deep neural networks — the powerhouse of modern AI. By analyzing the effective rank of the learned representation kernels, we unveil the observation that these models have an inherent preference for learning parsimonious relationships in the data. We provide further experimental results to support the hypothesis that simplicity bias is a good inductive bias for finding generalizing solutions. Building upon this finding, we present the Platonic Representation Hypothesis — the idea that as AI systems continue to grow in capability, they will converge toward not only simple representational kernels but also a common one. This phenomenon is evidenced by the increasing similarity of models across domains, suggesting the existence of a Platonic “ideal” way to represent the world. However, this path to the Platonic representation necessitates scaling up AI models, which poses significant challenges regarding computational demand. To address this obstacle, we conclude the thesis by proposing a framework for training a model with parallel low-rank updates to effectively reach this convergent endpoint.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158482</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Strategic AI Agents for Human-centric Multi-agent Systems</title>
<link>https://hdl.handle.net/1721.1/158481</link>
<description>Building Strategic AI Agents for Human-centric Multi-agent Systems
Jacob, Athul Paul
This thesis addresses the challenge of developing strategic AI agents capable of effective decision-making and communication in human-centric multi-agent systems. While significant progress has been made in AI for strategic decision-making, creating agents that can seamlessly interact with humans in multi-agent settings remains a challenge. This research explores the limitations of current approaches, such as self-play reinforcement learning (RL) and imitation learning (IL), and proposes novel methods to overcome these constraints. Modeling human-like communication and decision-making is a crucial first step toward building effective strategic agents. The initial part of the thesis addresses this through two approaches. We start by developing a regret minimization algorithm for modeling actions of strong and human-like agents called piKL, which incorporates a cost term proportional to the KL divergence between a search policy and a human-imitation-learned policy. This approach improves reward while keeping behavior close to a human-imitation-learned policy, producing agents that predict human actions accurately while improving performance in the benchmark game of no-press Diplomacy. Then, we develop a procedure for modeling populations of agents that communicate with humans using natural language. Our sample-efficient multitask training scheme for latent language policies (LLPs) improves the reward obtained by these policies while preserving the semantics of language in a complex real-time strategy game. Building on these foundations, the second part of the thesis focuses on building strategic agents for human-centric multi-agent domains. The research introduces the DiL-piKL planning algorithm and its extension, RL-DiL-piKL, which regularize self-play reinforcement learning and search towards a human-imitation-learned policy. These algorithms enable the training of Diplodocus, an agent achieving expert human-level performance in no-press Diplomacy.
A significant milestone is reached with Cicero, the first AI agent to achieve human-level performance in full-press Diplomacy, integrating a language model (LM) with planning and reinforcement learning algorithms based on piKL. The final part of the thesis revisits language generation tasks, applying piKL to model pragmatic communication and improving LM truthfulness. It presents Regularized Conventions (ReCo), a model of pragmatic language understanding that outperforms existing best response and rational speech act models across several datasets. Furthermore, a novel approach to LM decoding is introduced, casting it as a regularized imperfect-information sequential signaling game. This results in the equilibrium-ranking algorithm, which consistently improves performance over existing language model decoding procedures.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158481</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Variation in Healthcare across Time and Providers using Machine Learning</title>
<link>https://hdl.handle.net/1721.1/158480</link>
<description>Characterizing Variation in Healthcare across Time and Providers using Machine Learning
Ji, Christina X.
Modeling healthcare decisions and their outcomes is a complex problem. In addition to being affected by patient characteristics, the prognosis can vary depending on when the patient is receiving care, and treatment decisions can vary depending on who makes the decisions. In this thesis, we consider two axes of variation in healthcare: over time and across providers. For both axes, we focus on identifying when variation exists, characterizing the patients who are affected by such variation, and addressing shifts due to this variation. The solutions we propose draw ideas from causality and dataset shift.&#13;
&#13;
In the first part of this thesis, we address these three aspects for variation over time. First, we create an algorithm that can detect when a model is affected by change over time and identify sub-populations where the model is more affected. We use our algorithm to perform a large-scale study of temporal shifts in health insurance claims. We demonstrate changes over time are prevalent in healthcare and examine case studies to better understand the drivers of such changes. Next, we examine how to learn a model that can perform well on current data. As data from the current time period is limited, we consider several methods that can leverage sequences of historical data to learn a good image classification model for the final time step. We build a benchmark for evaluating these methods on sequences constructed from synthetic shifts and validate our conclusions on a real-world dataset.&#13;
&#13;
In the second part of this thesis, we address similar questions for variation across providers. First, we create a statistical approach to test whether significant variation exists across providers. Our approach involves learning a model of treatment decisions with provider-specific random effects. We perform a case study on first-line type 2 diabetes treatment and find significant variation exists across providers. Then, we develop an algorithm for identifying regions of patients with the most disagreement between providers. We formalize this as a causal inference problem, where disagreement is defined by the causal effect of the provider on the treatment decision. We illustrate this algorithm on first-line type 2 diabetes and Parkinson's treatment decisions and uncover regions of variation that align with uncertainty in clinical guidelines.&#13;
&#13;
In the third part of this thesis, we build a tool for examining the effects of variation over time or across providers for individual patients. We use a large language model built on electronic health record concepts to generate patient trajectories. To enable interventions on time and provider, we introduce new tokenizations for these concepts. We also incorporate a structural causal model for patient visits to allow for generation of interventional and counterfactual trajectories. We hope the model in this part of the thesis can be used to answer additional questions about how patient trajectories would change if they were treated during a different time period or by a different provider.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158480</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable Interactions between Optical Fields and Atom-like Systems in Integrated Circuits</title>
<link>https://hdl.handle.net/1721.1/158479</link>
<description>Programmable Interactions between Optical Fields and Atom-like Systems in Integrated Circuits
Larocque, Hugo
Photons can interact with a wide variety of quantum systems and their ability to more easily preserve their coherence makes them ideal candidates for transmitting information between remote quantum information processors. Photonic integrated circuits (PICs), which can be manufactured with modern semiconductor fabrication, provide a platform in which such interactions can occur at scale. Implementing integrated devices enabling these interactions within programmable and scalable settings while preserving a sufficient amount of strength continues to be a general goal in quantum photonics. Here, we implement device designs and architectures that improve current limits on the programmability and scalability of three types of optical interactions. More specifically, we explore the use of programmable multimode interference as a means for unitary transformations onto a set of optical spatial modes, optical resonators for high-extinction coherent modulators driven by RF signals, and large-scale silicon photonics for interacting with hybrid integrated quantum dot emitters.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158479</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligent Textiles for Physical Interactions</title>
<link>https://hdl.handle.net/1721.1/158478</link>
<description>Intelligent Textiles for Physical Interactions
Luo, Yiyue
Human-environment interaction is a fundamental aspect of our daily lives, involving the constant use of our sensory and motor systems to extract, process, and communicate information. However, capturing, analyzing, and reproducing these interactions pose significant challenges due to their pervasive, variable, and prolonged nature, as well as their unique character for each individual. Despite these challenges, it is essential to develop systems that can accurately capture and reproduce human-environment interactions for a wide range of applications, including human behavior studies, health monitoring, human-computer interactions, and robot imitation learning. This thesis focuses on developing seamlessly integrated, scalably manufactured sensing and actuating systems, as well as advanced computational pipelines to capture, analyze, and reproduce adaptive ubiquitous physical human-environment interactions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158478</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sparse and Structured Tensor Programming</title>
<link>https://hdl.handle.net/1721.1/158477</link>
<description>Sparse and Structured Tensor Programming
Ahrens, Willow
From FORTRAN to NumPy, tensors have revolutionized how we express computation. However, tensors in these, and almost all prominent systems, can only handle dense rectilinear grids of values. Real-world tensors are often structured, containing patterns which allow us to optimize storage or computation, such as sparsity (mostly zero), runs of repeated values, or symmetry. Specializing implementations for structure yields significant speedups, but support for structured tensors is fragmented and incomplete. The heart of the problem is coiteration, simultaneously iterating over multiple tensors in a program, where each tensor format may have different internal structure. As each combination of structures requires a unique coiteration algorithm, existing frameworks struggle to abstract over the design space, instead hard-coding support for a few programs and/or a few structures. In this thesis, we build an abstraction for coiteration, enabling us to support both a wide range of programs and diverse tensor structures. We use a language, looplets, to describe the structure of tensors in tensor programs. Looplets allow the compiler to generate code to coiterate over any combination of structured tensor formats. The looplets language decomposes loops over sparse and structured formats hierarchically. This decomposition simplifies compilation, allowing us to capture key mathematical properties (such as x∗0 = 0, which motivates sparsity) with simple term rewriting. Building on looplets, we introduce a new language, Finch, for general structured tensor programming. Finch makes it easier to compute with structured tensors by combining program control flow and tensor structures into a common representation where they can be co-optimized. Finch automatically specializes control flow to data so that performance engineers can focus on experimenting with many algorithms. 
Finch supports a familiar programming language of loops, statements, ifs, breaks, etc., over a wide variety of tensor structures, such as sparsity, run-length-encoding, symmetry, triangles, padding, or blocks. Finch reliably utilizes the key properties of each structure, making it easier to write and optimize structured tensor programs. In our case studies, we show that this leads to dramatic speedups in diverse applications, including linear algebra, image processing, and graph analytics. Our abstracted design makes it easier to extend Finch to new tensor structures and programming models. Finch has been separately extended to support a DSL for symmetry-aware tensor programs and to support real-valued indexing.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158477</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Paths to AI Accountability: Design, Measurement, and the Law</title>
<link>https://hdl.handle.net/1721.1/158476</link>
<description>Paths to AI Accountability: Design, Measurement, and the Law
Cen, Sarah H.
Algorithmic systems are increasingly intervening on human interactions and decisions, from selecting the content users see on social media to helping hiring managers choose candidates to interview. In recent years, the falling barrier between humans and AI has sparked fears about AI’s capabilities and elicited questions about the role that algorithms and, increasingly, AI should play in our lives. As society continues working towards answering these questions, this thesis argues that we must construct paths to AI accountability by determining who owes responsibility to whom in the AI ecosystem, upholding these responsibilities, and enforcing them. Pursuing AI accountability allows us to innovate while still acknowledging that AI is a technology developed and wielded by human actors. Furthermore, by focusing on the responsibilities of human actors, this approach builds on existing social and legal frameworks of accountability. Within this vast, multidisciplinary research area, this thesis centers on three aspects of AI accountability: design, measurement, and the law. In Part I, we examine the importance of designing responsible AI systems from the ground up, which involves exploring definitions of responsibility, methods for achieving them, and the ramifications (e.g., trade-offs) of responsible design. As demonstrations of design, we study three different contexts. Each context builds on a notion of responsibility, and we investigate how these notions—which include trustworthiness, fairness, and social welfare—arise and interact. We provide formal definitions of each notion, discuss their implications, and propose interventions for achieving them. In Part II, we turn our attention to AI measurement: quantifying AI behaviors and effects through systematic observations and procedures. 
We illustrate the importance of AI measurement through three case studies: (i) a black-box audit for social media algorithms; (ii) an estimator and experiment design for individual treatment effect estimation in the presence of spillover; and (iii) a user study testing whether users adapt to their recommender systems. In this part, we show how measurement can play a crucial role in compliance testing, analyzing AI behavior, and producing evidence that can inform decision-making (e.g., policy). In Part III, we discuss how the law can align incentives with AI accountability as well as challenges in realizing AI accountability in practice. We center our discussion on two works. The first seeks to fill a gap in the law around AI that arises from AI’s unintuitive and opaque nature, and argues that AI decision-subjects have a substantive right in the age of AI that we term the “right to be an exception.” While the first work studies a gap in the law, the second tackles practical challenges in carrying out the law. It examines how lacking both transparency and access to AI systems can frustrate the ability to monitor, evaluate, and audit AI systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158476</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Utilization and Synthesis of Symbolic World Models for Safe, Generalizable, and Efficient Action</title>
<link>https://hdl.handle.net/1721.1/158475</link>
<description>Utilization and Synthesis of Symbolic World Models for Safe, Generalizable, and Efficient Action
Hunt, Nathan
Reinforcement learning with neural networks has proven incredibly flexible at learning to act in diverse environments. Model-based RL techniques have helped to ameliorate the dependence on large quantities of data that these models normally have. However, despite their flexibility, neural world models have several drawbacks. Symbolic world models, in comparison, are easier to verify (e.g. for safety concerns), more compatible with domain-independent planning techniques, and able to be learned or adapted with more limited data. In this thesis, I will demonstrate these advantages of symbolic world models in three projects. The first, VSRL, shows how we can use a symbolic world model to ensure that an RL policy is safe during both training and deployment and promote safe exploration. The second, SPARSER, presents a hybrid domain planner which uses world models in a planning domain description language. It showcases how we can exploit the event structure in the world model to enable more efficient planning. In the final project, PWM, I will explore learning a world model directly from observations and actions gathered from interacting with an environment. We combine symbolic and neural synthesis techniques to enable efficient world model synthesis even from visual observations. Together, these projects demonstrate the versatility and value of symbolic world models.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158475</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpretable and Automated Bias Detection for AI in Healthcare</title>
<link>https://hdl.handle.net/1721.1/158474</link>
<description>Interpretable and Automated Bias Detection for AI in Healthcare
Alexiev, Christopher
Biases in artificial intelligence systems and the data they operate over are a major hurdle to their application in clinical and biomedical settings. Such systems have frequently been shown to fail to generalize from their training data to the real world environment and often display differing levels of accuracy over different population subgroups, which has detrimental effects on patients' quality of care and on healthcare equality. Here, we introduce an automated framework for identifying and understanding nontrivial sources of bias in healthcare datasets and AI models. Our framework is data and model agnostic and does not rely on human-developed heuristics or assumptions to uncover bias. We demonstrate its effectiveness by uncovering serious and nontrivial sources of bias in three widely used clinical datasets and one biomedical dataset, over the diverse tasks of diabetes risk prediction, lung cancer risk prediction, and biomolecular toxicity prediction. Our framework is used to uncover biases caused by patient BMI and computed tomography (CT) scanner type in the data used by a cutting-edge lung cancer risk prediction AI model, causing AUC drops on the order of ten percent.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158474</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical Characterisation of Strain and Defects in 2D Photonic Materials</title>
<link>https://hdl.handle.net/1721.1/158473</link>
<description>Optical Characterisation of Strain and Defects in 2D Photonic Materials
Mukherjee, Abhishek
Strain and defect engineering have proven to be powerful tools for modifying the optoelectronic properties of semiconductors. This thesis aims to advance the fundamental understanding of electronic and optical properties in material systems with broken inversion symmetries and to use this understanding to engineer in-situ, localized strain fields for tailoring photonic responses at the nanoscale. We will address the fundamental question: How can we characterize the effect of strain and defects in two-dimensional photonic materials? To this end, we open with a review of current strategies in strain engineering, its fundamental consequences on electronic, optical, and magnetic properties, and the state-of-the-art applications of this technology in achieving band-gap-engineered straintronic devices. Touching on the advent of strain engineering for flexoelectricity - a spontaneous material polarization produced by a strain gradient that lifts the inversion symmetry, which can enable a bulk photogalvanic effect - we posit that metavalent bonding plays a key role here, by showing that the majority of prime material candidates known to exhibit a large photogalvanic response share this characteristic. The rest of the thesis focuses on characterizing layered metal thio(seleno)phosphates, a family of materials known for their magnetic, electronic, and nonlinear optical properties. We show how the optical properties of these materials can be modulated via different means of defects and strain. These photoactive materials can be pivotal to a future comprising strain-engineered flexoelectric devices, which take advantage of the bulk photogalvanic effect, to develop a new family of practical, deployable, self-powered, and low-cost photodetectors, and integrated arrays with limits-breaking performance in the UV-to-LWIR spectral bands.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158473</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The growth characteristics of indium antimonide as revealed by chemical etching and x-ray anomalous transmission.</title>
<link>https://hdl.handle.net/1721.1/158472</link>
<description>The growth characteristics of indium antimonide as revealed by chemical etching and x-ray anomalous transmission.
Miller, David Christopher.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1970; Vita.; Bibliography: leaves 122-127.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158472</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photochemistry of organometallic compounds : generation and reactions of 16e- and 17e- intermediates</title>
<link>https://hdl.handle.net/1721.1/158471</link>
<description>Photochemistry of organometallic compounds : generation and reactions of 16e- and 17e- intermediates
Young, Kent Maxwell.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1989; Title as it appears in the M.I.T. Graduate List, June 1989: Photochemistry of organometallic complexes--generation and reactions of 16e- and 17e- intermediates.; Includes bibliographical references (leaves 164-166).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158471</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-accuracy, speed-optimized positioning system for electron beam lithography</title>
<link>https://hdl.handle.net/1721.1/158470</link>
<description>High-accuracy, speed-optimized positioning system for electron beam lithography
Dadok, Luděk.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1982; Vita.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158470</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A model of flow through an aqueous vein</title>
<link>https://hdl.handle.net/1721.1/158469</link>
<description>A model of flow through an aqueous vein
Yuan, San Shing.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1980; Bibliography: leaf 44.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158469</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transfer function of heavy duty gas turbine combustor components.</title>
<link>https://hdl.handle.net/1721.1/158468</link>
<description>Transfer function of heavy duty gas turbine combustor components.
Farrell, Thomas Dominic.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158468</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of iron tricarbonyl complexes.</title>
<link>https://hdl.handle.net/1721.1/158467</link>
<description>A study of iron tricarbonyl complexes.
Fanelli, Joseph John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158467</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effect of braking on automobile vehicle dynamics.</title>
<link>https://hdl.handle.net/1721.1/158466</link>
<description>The effect of braking on automobile vehicle dynamics.
Evans, David Gordon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158466</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fractographic investigation of crack-closure.</title>
<link>https://hdl.handle.net/1721.1/158465</link>
<description>Fractographic investigation of crack-closure.
Faral, Michel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158465</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An accident and seismic containment reliability study including statistical uncertainty</title>
<link>https://hdl.handle.net/1721.1/158464</link>
<description>An accident and seismic containment reliability study including statistical uncertainty
Fardis, M. N.
            (Michael N.)
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1978; Bibliography: leaves 180-183.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158464</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A piezoelectric force measuring system for human mobility analysis.</title>
<link>https://hdl.handle.net/1721.1/158463</link>
<description>A piezoelectric force measuring system for human mobility analysis.
Estey, Paul Norman.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Bibliography: leaves 178-182.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158463</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forced vibrations of a single stage axial compressor rotor.</title>
<link>https://hdl.handle.net/1721.1/158462</link>
<description>Forced vibrations of a single stage axial compressor rotor.
Fabunmi, James Ayinde.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158462</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the murine leukemia virus genome.</title>
<link>https://hdl.handle.net/1721.1/158461</link>
<description>Mapping the murine leukemia virus genome.
Faller, Douglas Vincent.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1978; Vita.; Bibliography: leaves 182-188.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158461</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamental groups of algebraic stacks</title>
<link>https://hdl.handle.net/1721.1/158460</link>
<description>Fundamental groups of algebraic stacks
Noohi, Behrang,
            1973-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2000; Includes bibliographical references (p. 57).
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158460</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Kjeldahl nitrogen process for well waters</title>
<link>https://hdl.handle.net/1721.1/158459</link>
<description>The Kjeldahl nitrogen process for well waters
Fuller, George W.
            (George Washington),
            1868-
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1890
</description>
<pubDate>Wed, 01 Jan 1890 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158459</guid>
<dc:date>1890-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of brom and nitroso phenols</title>
<link>https://hdl.handle.net/1721.1/158458</link>
<description>A study of brom and nitroso phenols
Carney, James Andrew.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1890
</description>
<pubDate>Wed, 01 Jan 1890 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158458</guid>
<dc:date>1890-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Oxidation of olive oil by metal catalysts</title>
<link>https://hdl.handle.net/1721.1/158457</link>
<description>Oxidation of olive oil by metal catalysts
Hart, Morris.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1921; Includes bibliographical references (leaf 16).
</description>
<pubDate>Sat, 01 Jan 1921 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158457</guid>
<dc:date>1921-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of methods of determining flood damages and of evaluating flood control benefits</title>
<link>https://hdl.handle.net/1721.1/158456</link>
<description>A study of methods of determining flood damages and of evaluating flood control benefits
Lampert, James B.
            (James Benjamin),
            1914-1978.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1939; Includes bibliographical references (leaf 101).
</description>
<pubDate>Sun, 01 Jan 1939 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158456</guid>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic scattering of neutrons by hexagonal cobalt</title>
<link>https://hdl.handle.net/1721.1/158455</link>
<description>Magnetic scattering of neutrons by hexagonal cobalt
Moon, R. M.
            (Ralph Marks)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1963; Vita.; Includes bibliographical references (leaves 97-98).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158455</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An analytical study of the effects of vibrations on heat transfer from a heated horizontal cylinder</title>
<link>https://hdl.handle.net/1721.1/158454</link>
<description>An analytical study of the effects of vibrations on heat transfer from a heated horizontal cylinder
Chiang, Tom.
Thesis: Mech. E., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1961; Includes bibliographical references (leaves 31-32).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158454</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diffusion coefficients of styrene in mayonnaise and yogurt</title>
<link>https://hdl.handle.net/1721.1/158453</link>
<description>Diffusion coefficients of styrene in mayonnaise and yogurt
Huang, Wendy.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1980; Includes bibliographical references (leaf 74).
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158453</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermal shock resistance of ceramics.</title>
<link>https://hdl.handle.net/1721.1/158452</link>
<description>Thermal shock resistance of ceramics.
Goodof, Robert Steven.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1973; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158452</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatiotemporal Signatures of Elastoinertial Turbulence</title>
<link>https://hdl.handle.net/1721.1/158324</link>
<description>Spatiotemporal Signatures of Elastoinertial Turbulence
Yamani, Sami
The addition of small amounts of polymers to a Newtonian solvent makes the fluid viscoelastic, and can lead to significant drag reduction in high-speed flows. The interaction of viscoelasticity and inertia in a dilute polymer solution results in the emergence of unique inertioelastic instabilities. The nonlinear evolution of these instabilities engenders a state of turbulence with significantly different spatiotemporal features compared to Newtonian turbulence, commonly termed elastoinertial turbulence (EIT). We explore EIT by studying the dynamics of low-speed submerged jets of dilute aqueous polymer solutions injected through a nozzle into a tank of quiescent water or polymer solution. In a free shear layer, fluid elasticity has a dichotomous effect on jet stability depending on its relative magnitude, creating two distinct regimes in which elastic effects can either destabilize or stabilize the jet. For small levels of elasticity an inertioelastic shear-layer instability emerges, in agreement with existing linear stability analysis of viscoelastic jets, which is independent of bulk undulations in the column of fluid forming the jet. The growth of this instability near the edge of the jet destabilizes the flow, advancing the transition to turbulence to lower Reynolds numbers and closer to the nozzle compared to a Newtonian jet. Increasing the fluid elasticity merges this shear-layer instability into a bulk instability of the fluid column. In this regime, elastic tensile stresses in the sheared polymer solution act like an “elastic membrane” that stabilizes the flow, delaying the transition to turbulence to higher levels of inertia and greater distances downstream of the nozzle. In a wall-bounded shear layer, a separate investigation shows that fluid elasticity generates a self-sustained inertioelastic travelling wave within the wall boundary layer under flow conditions at which a Newtonian wall jet remains completely laminar. 
The phase velocity of this travelling wave decreases as fluid elasticity increases, resulting in the stabilization of the jet. In the fully-developed turbulent state far from the nozzle, viscoelastic jets exhibit unique spatiotemporal features associated with EIT. The time-averaged angle of jet spreading and the center-line velocity of the jet are self-similar with distance from the nozzle, and the similarity scaling coefficients vary with fluid elasticity. The cascade of turbulent eddies has a universal frequency spectrum independent of fluid elasticity. This spectrum is characterized by a power law with an exponent of −3 that is different from the well-known Kolmogorov law with exponent −5/3 for Newtonian turbulence. EIT also modifies the Lagrangian coherent structures that develop in the turbulent flow. Increasing elasticity generates coherent structures that are larger and more elongated in the streamwise direction, consistent with the suppression of streamwise vortices by EIT. On a larger scale, the elongated coherent structures create a stochastic cycle in EIT that consists of active and hibernating turbulent states with alternating strong and weak turbulent fluctuations. Looking ahead, this new fundamental understanding of EIT can be leveraged to explore the potential of biopolymers as cheap and environmentally-friendly drag reducing agents replacing synthetic polymers made from petroleum oil. Biopolymers are typically semiflexible polyelectrolytes with rheological properties that can be adjusted over a wide range by varying conditions such as the solvent quality and/or the ionic strength. We study aqueous solutions of a typical long chain biomacromolecule (Xanthan gum) in canonical shear and extensional flows and quantify how the rheological properties can be tuned by changing the ionic strength of the solvent. 
In steady shear flow, increasing the biopolymer concentration dramatically increases both the zero shear viscosity and the extent of shear-thinning, while increasing the ionic strength of the solvent decreases both the zero shear viscosity and the level of shear-thinning. In transient extensional flow, increasing biopolymer concentration increases the extensional relaxation time of the solution, while increasing the ionic strength of the solvent decreases this relaxation time. Based on our insights from this rheological characterization, we demonstrate that injecting a high-inertia jet of aqueous biopolymer solution into quiescent environments at different levels of ionic strength can significantly modify the spectral characteristics of the inertioelastic instabilities that develop and lead to a change in the spatiotemporal signatures of elastoinertial turbulence. Our findings lay out a pathway for identifying the most promising biopolymers to serve as biodegradable drag-reducing agents for marine vehicles operating in high-salinity environments, enabling savings in the cost of transport and a future reduction in our carbon footprint.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158324</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale design of bioadhesive platforms for next-generation applications in surgery and healthcare</title>
<link>https://hdl.handle.net/1721.1/158323</link>
<description>Multiscale design of bioadhesive platforms for next-generation applications in surgery and healthcare
Wu, Sarah J.
Bioadhesives—materials capable of adhering to biological tissues—hold significant promise as transformative tools in healthcare, offering the ability to repair tissues with ease and minimal damage. These materials present numerous opportunities in surgery and human-machine interfaces, creating a broad landscape of applications that has captivated clinical and scientific interest alike. Still, there remain open challenges surrounding their reliability, biocompatibility, usability, and versatility. These include weak adhesion with wet tissues, foreign body response, cumbersome application processes, and limited customizability. This dissertation presents a multiscale framework for addressing these obstacles, encompassing design strategies on the molecular, polymer network architecture, macroscale device, and application process levels. The implementation of this framework is demonstrated through the development of two pioneering bioadhesive platforms: (1) a multifunctional patch for minimally invasive surgery, and (2) a 3D printable bioadhesive for fabricating tunable, application-specific devices. Together, these platforms expand the design space for creating robust and versatile tissue repair solutions and biomedical devices.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158323</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Critical Material Recovery from Salt-Lakes and Spent Batteries with Membranes and Solvents</title>
<link>https://hdl.handle.net/1721.1/158322</link>
<description>Critical Material Recovery from Salt-Lakes and Spent Batteries with Membranes and Solvents
Foo, Zi Hao
The sustainable extraction and recovery of critical metals such as lithium, cobalt, and rare earth elements are essential for advancing renewable energy technologies, electric vehicles, and modern electronics. This thesis addresses the significant environmental, economic, and logistical challenges associated with traditional methods of extracting these metals from primary sources like spodumene ores and continental salt lakes, and secondary sources like spent battery and magnet leachates. Conventional extraction processes from primary sources are highly energy-intensive, environmentally taxing, and pose substantial water usage concerns. In contrast, while secondary sources such as spent lithium-ion batteries offer a promising avenue to alleviate environmental impacts and secure a stable supply chain, they still pose challenges in terms of high chemical usage and waste acid management. This research focuses on advancing three innovative processes: nanofiltration, electrodialysis, and solvent-driven fractional crystallization, aiming to enhance the efficiency and sustainability of metal recovery from both primary and secondary sources. The thesis findings are supported by direct experimental measurements and extensive computation involving multi-ionic and mixed-solvent activity and fugacity coefficient models, fundamental molecular dynamics simulation, multicomponent continuum dynamics ion transport models across nanofiltration and ion exchange membranes, and techno-economic analysis of membrane and solvent processes. First, advancements in nanofiltration technology are explored to pre-treat salt-lake brines for improved lithium extraction efficiency and purity. Positively charged nanofiltration membranes demonstrate enhanced monovalent selectivity through Donnan exclusion, effectively removing multivalent cations and improving lithium purity in the feed brine. 
Our results show that the Li/Mg selectivity can be enhanced 13-fold with Donnan-enhanced nanofiltration membranes. Our experiments demonstrate the Donnan-enhanced membrane’s ability to reduce the magnesium concentration of salt-lake brines to 0.14 % in a single filtration stage. This method not only increases the yield and quality of extracted lithium but also reduces the environmental impact by minimizing additional purification steps. Second, electrodialysis is investigated for the selective recovery of lithium from complex mixtures like battery leachates. This technique leverages ion mobility differences to retain lithium ions while separating other cations. Bipolar membrane electrodialysis further converts lithium chloride into high-purity lithium hydroxide and hydrochloric acid, which can be recycled, thereby supporting a circular economy in battery recycling. Experimental results demonstrate that selective electrodialysis can achieve ∼99 % lithium purity with 68.8 % lithium retention from Ni-Mn-Co battery leachates. The techno-economic analysis projects LiOH production costs between USD 1.1 and 3.6 per kilogram, approximately an order of magnitude lower than prevailing market prices. Third, the use of dimethyl ether (DME) in solvent-driven fractional crystallization is examined as an innovative method for extracting critical metals. DME’s properties allow for efficient water extraction from aqueous solutions, inducing the crystallization of metals like cobalt and nickel. Our computational analysis reveals that DME-based solvent-driven water extraction can concentrate an input saline feed to 5.5 M and regenerate over 99 % of the DME using ultra-low-grade heat below 50°C, with a DME/water selectivity ratio of 125. This process ensures high purity and reduces post-processing needs, offering a more environmentally friendly alternative to traditional solvent extraction techniques.
The findings of this thesis underscore the potential of advanced variants of nanofiltration, electrodialysis, and solvent-driven fractional crystallization technologies in promoting sustainable and economically viable critical metal recovery processes. By addressing the pressing issues of environmental degradation and resource scarcity, this research supports the development of a circular resource economy, where waste materials are continuously reused and recycled, contributing to a sustainable energy future.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158322</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Innovative Structural and Mechanical Satellite Systems</title>
<link>https://hdl.handle.net/1721.1/158321</link>
<description>Innovative Structural and Mechanical Satellite Systems
Thomas, Annika
This thesis covers two topics within the field of satellite mechanical engineering. The first is the structural and thermal design and validation of the BeaverCube 2 Earth-imaging CubeSat. The second is the electromagnetics modeling and simulation of an inductive spin drive for a novel magnetically levitated spherical control moment gyroscope for satellite attitude control.&#13;
&#13;
For the first topic on BeaverCube 2, the key tasks were to design and assemble the structure of the CubeSat, ensure that the subsystems maintain their operating temperatures on orbit, and validate the structural integrity of the CubeSat structure during launch. We design and manufacture 24 components that integrate all subsystems of BeaverCube 2 and meet the size requirements of a 3U (10 cm × 10 cm × 30 cm) CubeSat, including a chassis, panels, a payload structure, and connectors for the stack of boards. Next, we ensure through analytical and simulated thermal analysis that no subsystem of the satellite exceeds its temperature limits, showing that during worst-case hot (70° beta angle) and worst-case cold (70° beta angle) orbits, no subsystem comes within 5 °C of its operating temperature limits. Finally, we analyze the structure of BeaverCube 2 to validate that the components can structurally withstand the 4-7 G linear accelerations, 13.5 rad/s radial accelerations, 1200 N side-rail loads, and random vibration environment that may be experienced during launch [1]. The design is shown to be robust in these conditions, with margins of safety in stress ranging from 19.97 to 37.56 and deformation of the stack of circuit boards not exceeding 0.05 mm. The lowest vibration mode of the structure occurs at 623 Hz, well above the required minimum of 100 Hz.&#13;
&#13;
For the second topic of modeling the spherical control moment gyroscope, the key tasks were to design an actuation method using inductive drive and to experimentally validate a closed-loop controller for suspension of a prototype. For the actuation method, we present the electromagnetics modeling of an inductive spin drive, including analytical derivations of a bulk conductivity model and a skin current model. The analytical skin model shows that inductive drive with a rotating dipole magnetic field can generate a peak torque of 130 &#120583;N·m. We simulate both models with a rotating dipole and a rotating quadrupole stator drive configuration. Next, we successfully magnetically levitate a permanent-magnet rotor prototype. We develop an analytical plant model for the system and a controller for closed-loop suspension with a 40 Hz crossover and 20° phase margin, then present preliminary experimental results.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158321</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Modeling of a Catapulting Magnetic Transmission for Tuning Energy Storage and Release</title>
<link>https://hdl.handle.net/1721.1/158320</link>
<description>Design and Modeling of a Catapulting Magnetic Transmission for Tuning Energy Storage and Release
Thomas, Marcel Adam Craig
The purpose of this work is to generate design rules and models for a catapulting magnetic leadscrew transmission. These rules and models give scientists and engineers the ability to tune energy storage and release, and thereby increase the peak specific power (power/mass) of an actuator, enabling rapid design and development of lightweight (&lt; 0.5 kg), high peak power (&gt;200 W) actuators. This has the potential to impact powered exoskeletons and force-controlled robotics for rehabilitation and strength augmentation of explosive movements such as locomotion, jumping, and throwing. This thesis provides the following scientific contributions: (i) the concept of a catapulting magnetic screw actuator, (ii) experimentally validated models that are useful for the design and optimization of the magnetic leadscrew, considering both magnetic and structural aspects, (iii) experimentally validated models of the catapulting event in a magnetic leadscrew, and (iv) use of these models in the context of a practical application, namely powered exoskeletons that may reduce the metabolic cost of walking. First, the catapulting magnetic screw is introduced. An equation of motion is derived and experimentally validated, demonstrating that the potential wells in the magnetic screw create a ripple in the power as a function of time. Then, despite the equation of motion being a nonlinear differential equation with no closed-form solution, bounds on the ripple magnitude and frequency are derived. This gives the slip force and the lead needed to meet a specified tolerance on power as a function of time.&#13;
Then, a model is developed that enables rapid design of a magnetic screw to achieve a desired slip force. The model agrees with finite element analysis to within 10% error as each design parameter is varied over multiple orders of magnitude. Next, given a magnetic screw, the surrounding structure must be sufficiently stiff to keep the magnets from sticking together; models of the magnetic stiffness matrix and the structural stiffness matrix, and simplifications thereof, are given to ensure sufficient structural stiffness. Finally, because the catapulting event may be too fast for a desired application, it is shown how nonlinear springs may be used to meet the requirements of powered exoskeletons that assist in walking.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158320</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Evaluation of a Powered Series-Elastic Cycloidal Ankle (CyAn) Prosthesis</title>
<link>https://hdl.handle.net/1721.1/158319</link>
<description>Design and Evaluation of a Powered Series-Elastic Cycloidal Ankle (CyAn) Prosthesis
Du, Lucy W.
The prevalence of major lower limb loss in the United States is projected to increase significantly due to rising rates of diabetes and obesity, highlighting an urgent need for advanced prosthetic solutions [1]. Individuals with lower limb amputations often face increased energy expenditure and secondary musculoskeletal conditions as a result of using conventional prosthetic devices [2]. These challenges underscore the necessity for innovative prosthetic designs that can enhance user mobility and comfort. A promising solution is the powered ankle-foot prosthesis, which has the potential to provide biologically accurate push-off power, offering significant benefits such as improved walking economy, increased mobility, and reduced impact forces on the user’s residual limb. However, existing powered prostheses often lack customization and fail to adequately meet the diverse and specific needs of individual users, which can limit their effectiveness and adoption. This thesis introduces a personalized, optimized, low-profile powered ankle-foot prosthesis, known as the Cycloidal Ankle (CyAn), designed to achieve biological ranges of motion and torque during level-ground walking. The CyAn employs a cycloidal drive transmission coupled with a custom series carbon fiber spring to mimic tendon-like compliance, which enhances energy storage and return while maintaining a low build height to accommodate a broader range of users, without compromising the prosthesis’s range of motion or mechanical performance. The device provides 25° of dorsiflexion and 41° of plantarflexion, and can output at least 130 Nm of torque during walking, corresponding to the biological ankle torque during level-ground walking at 1.5 m/s for a 50th-percentile male [3]. 
The development of the CyAn prosthesis involved a comprehensive mechanical and mechatronic design process, encompassing modeling, optimization of electrical energy consumption, component selection, and benchtop and clinical evaluation. This thesis describes the detailed design and analysis of the CyAn prosthesis, including a parametric model for predicting device performance, fatigue life calculations, and mechanical integrity assessments of device components. Benchtop testing results confirm that the device achieves the targeted performance metrics, demonstrating its capability to replicate natural gait mechanics. The clinical validation study was conducted with three participants with unilateral transtibial amputation under three walking conditions: level ground at 1.5 m/s, uphill (+10° slope) at 0.8 m/s, and downhill (-10° slope) at 1.2 m/s. During the experiment, the subjects walked on an instrumented treadmill to regulate the walking speed while force and motion data were recorded. The results of these tests demonstrate the design’s capability to replicate natural gait mechanics and kinetics, and yield insights into further improvements and adaptations. This thesis presents a first-of-its-kind rotary powered ankle-foot prosthesis, utilizing a cycloidal drive mechanism and a custom series carbon fiber spring. Compared to existing powered devices, the CyAn offers a lower device mass and increased biomimetic functionality, making it a cost-effective solution for improving mobility and quality of life for transtibial amputees. 
This research establishes a framework for developing customized prosthetic solutions that address the unique needs of individual users, with significant clinical results demonstrating the potential of the CyAn to improve health outcomes by normalizing biomechanics, increasing energy efficiency, and reducing adverse limb loading.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158319</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Screen Time</title>
<link>https://hdl.handle.net/1721.1/158318</link>
<description>Screen Time
Landman, Jeffrey
In Times Square, architecture is inextricable from mediated representations. The place is dislocated by the screens that envelop its buildings and the other screens, around the world, upon which its image is ceaselessly presented. The neighborhood itself is named after the Times Tower, which was opened in 1905 as the office and printing press of The New York Times, and remains at the center of the square today, entirely empty, voided by the advertising value of its screens. But this condition is not a contemporary anomaly. If the screens, flowed through by consumer desire, currently vaporise the building’s edge, in 1904, before it was even occupied, the building summoned the city with the results of the general election, broadcast to the metropolis via searchlight. The building has always extended its edge, projecting public messages while concealing private concerns.&#13;
&#13;
This thesis understands the building as one actor in a media apparatus: a network of interconnections between broadcasting devices and media, infrastructure, public and political events, development policy and financial systems. The Tower indexes 20th century architecture’s participation in this media apparatus, telling a story in which communication and the distribution of power predate and outlast inhabitation, a story in which occupation is not part of the program. The thesis tracks the tower through six innovative broadcasting devices which the building sponsored, including the world’s first moving electric sign, the New Year’s Eve Ball, the world’s first changeable architectural screen, and the world’s largest open architectural competition. &#13;
&#13;
The form of the thesis is a short movie that uses found footage and computer generated animations to apprehend the Tower amid its myriad images. In designing for animated representation the thesis is positioned in a lineage of paper architectures, proposing a form of architectural production which embraces and redirects the forces of the media apparatus. The movie reconfigures, misaligns and misuses its historical sources to reproduce and subvert the Screen Time from which architecture can now never be distinct.
</description>
<pubDate>Mon, 01 Feb 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158318</guid>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Product Purity Prediction and Anomaly Detection for an Automated Peptide Manufacturing Platform</title>
<link>https://hdl.handle.net/1721.1/158317</link>
<description>Product Purity Prediction and Anomaly Detection for an Automated Peptide Manufacturing Platform
Yang, Liudi
This thesis aims to develop and deploy a method for predicting product purity and automating anomaly detection for Mytide Therapeutics’ peptide manufacturing platform. A baseline study revealed how early purity prediction and anomaly reporting could decrease the production cycle time, manual data review, and chemical waste produced by the synthesis process. The most important tool for making purity predictions is UV absorbance measured on the byproducts and excess reagents leaving the reactor where the peptides are made; a large part of this thesis was therefore improving the quality of the UV data so that purity predictions could be made from the improved UV traces. Sensor data from historical runs, including pressure, temperature, and flow rates, were analyzed to characterize several common anomalies. The reporting system takes in live data and alerts the relevant parties when limits are reached, so that corrective action can be implemented quickly. The anomaly tracking code also generates a report that can either be viewed on the user interface or stored in the backend database with the run’s historical data. Implementation of the described system improvements had several positive impacts on the workflow. The live anomaly alerts allowed issues to be reported to the relevant parties upon occurrence, which increased the uptime of the system. The anomaly report, which is tagged to each peptide synthesis run, allows for historical data evaluation and easy decision-making for advancing the peptide to the next step of the process. The purity prediction allowed certain poor-purity peptides to be identified earlier, by 27% of the production time. Together, these system improvements helped advance the company’s peptide manufacturing platform towards fully automated decision-making.
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158317</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Generalization of Models on Streets Imagery: Methods and Applications</title>
<link>https://hdl.handle.net/1721.1/158316</link>
<description>Towards Generalization of Models on Streets Imagery: Methods and Applications
Alhasoun, Fahad
The domains relevant to urban planning have been disrupted by the proliferation of highly granular city data and advancements in machine learning. However, machine learning models are susceptible to pitfalls that constrain their deployment in many applications, including domains related to urban settings, and much remains to be addressed between methods and applications before we can realize the full potential of machine learning to improve urban life. In this thesis, we focus on the use of streets imagery and classification problems. We motivate the thesis with a case study in which deep learning models are trained to predict street contexts (e.g., residential, park, commercial) from streets imagery. We then discuss a novel unsupervised domain adaptation method to address the drop in accuracy when models are tested outside the domain of the training data (e.g., a model trained on San Francisco and tested in Boston). We conclude with a proof of concept for a framework for developing more generalized models, beginning with a prototype system for streets imagery collection and labeling, and ending with an approach to generalization that breaks the problem into smaller prediction tasks to better expose the inner workings of the models.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158316</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Case studies in the modeling and control of continuous pharmaceutical manufacturing processes</title>
<link>https://hdl.handle.net/1721.1/158315</link>
<description>Case studies in the modeling and control of continuous pharmaceutical manufacturing processes
Maloney, Andrew John
The pharmaceutical industry employs a myriad of modalities, ranging from small molecules to biologics such as peptides, monoclonal antibodies, bi-specific antibodies, and viral vectors. The manufacturing of these products is as varied as the products themselves. Small molecules are synthesized chemically, i.e., by a series of key chemical transformation, work-up, and recovery steps. Larger molecules can be isolated from naturally occurring sources (e.g., humans, plants, or microorganisms), or produced via recombinant hosts such as Chinese hamster ovary (CHO) cells, Escherichia coli, or Saccharomyces cerevisiae, with some products requiring both a recombinant host and transient transfection or infection with additional genetic material.&#13;
&#13;
Across these modalities, industry, regulatory agencies, and academia are investigating technologies for improved quality, efficiency, capability, and consistency. Of these technologies, continuous manufacturing (CM) is of particular interest due to its ability to allow for reduced equipment sizing and footprint, improved environmental sustainability, and improved process control. This thesis supports the implementation of continuous pharmaceutical manufacturing through advanced modeling, simulation, and control, as described in three independent case studies.&#13;
&#13;
The first work considers the development of a virtual plant for the manufacturing of a small-molecule active pharmaceutical intermediate (API) through four chemical transformation, workup, and recovery steps. The plant is used for uncertainty quantification, improved process design, and novel process control strategy development. The second work considers the production of small, globular proteins by the yeast Pichia pastoris; a model for copy number stability is developed and validated using data in the open literature and data generated at MIT. The third work concerns the production of monoclonal antibodies (mAbs) using Chinese hamster ovary cells as a production host; hardware considerations, lower-level regulatory controls, and advanced process modeling and control for a heavily instrumented mAb manufacturing testbed are discussed. Across this thesis, the benefits of systems-level analysis in the continuous manufacturing of pharmaceuticals are documented and demonstrated.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158315</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tunability of Electrospun Scaffolds for Tissue Engineering</title>
<link>https://hdl.handle.net/1721.1/158314</link>
<description>Tunability of Electrospun Scaffolds for Tissue Engineering
Chyr, Gloria Un
Electrospinning is a cheap and quick method of creating non-woven scaffolds for tissue regeneration and growth with the proper fiber diameter for cell adhesion. However, electrospun scaffolds lack large pores between fibers, resulting in a densely packed mesh in which cells can adhere only to the surface of the material. Control of scaffold fiber size and porosity is critical to ensure scaffolds have a fiber diameter appropriate for cell adhesion and a porosity high enough to allow cell migration through the material. This thesis aims to demonstrate the tunability and control of electrospun gelatin scaffolds, to make them viable for use in tissue regeneration, by altering the grounded collector geometry and thus the electric field that nanofiber deposition follows. Previous electrospinning experiments show that processing parameters such as flow rate and voltage can affect fiber diameter and porosity, but remain insufficient to achieve dimensions viable for cell migration. Scaffold porosity is substantially more affected by the grounded collector geometry. By modifying the collector geometry, pore size can be controlled without affecting fiber morphology, and the deposition of gelatin nanofibers can be aligned or patterned to mimic natural tissue scaffolds. Introducing a non-conductive woven mesh between the collector and the source may allow further control of deposition patterns and thus scaffold construction. The path of electrospun fibers and the resulting deposition patterns can be predicted by modeling the electric field.
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158314</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Searching for Mixed Octahedral-Tetrahedral Interstitial Hydrogen Occupation in Pd-Ti Sublattices: A Computational Study</title>
<link>https://hdl.handle.net/1721.1/158313</link>
<description>Searching for Mixed Octahedral-Tetrahedral Interstitial Hydrogen Occupation in Pd-Ti Sublattices: A Computational Study
Metcalf, Isaac
With hydrogen conversion and storage technologies promising a revolution in the energy industry if volumetric energy density is increased, the loading of hydrogen to high concentrations in metal lattices has become of special interest. Here we use projector augmented-wave density functional theory methods to search the Pd-Ti-H system for stable instances of mixed tetrahedral-octahedral site occupation. We compute the energies of 42 hydrides constructed from seven metal sublattices: Ni₃Ti-prototype Pd₃Ti, CdI₂-prototype PdTi₂, and FCC four-atom unit cells of Pd, Pd₃Ti, PdTi, PdTi₃, and Ti. Our results suggest that mixed octahedral-tetrahedral occupation is energetically unfavorable in most cases, but a Li₃Bi-prototype hydride may be stable within the Pd₁-ₓTiₓH₃ system.
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158313</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inside the App Bureaucracy: The Use of Smartphone Apps in Public Service Delivery Organizations in Pakistan</title>
<link>https://hdl.handle.net/1721.1/158312</link>
<description>Inside the App Bureaucracy: The Use of Smartphone Apps in Public Service Delivery Organizations in Pakistan
Masud, Mohammad Omar
Smartphone apps are being used by governments in developing countries to monitor frontline officials in the delivery of public services. The development literature has expressed doubt about the transformative impact of digital technologies on entrenched bureaucracies in developing countries. While smartphone monitoring apps have improved the speed and reliability of information from the ground, we do not know how the availability of such apps among a large number of middle- to low-level officials affects work and practice in large government bureaucracies in a developing country. This dissertation examines four in-depth case studies involving the use of smartphone apps in Pakistan: the anti-dengue program in the city of Lahore, the garbage collection agency in Lahore, crime mapping by the Lahore police, and school monitoring by the provincial school department in the province of Punjab (which includes Lahore). Using a detailed analytical framework, I trace the evolution of the smartphone monitoring apps in each case, starting from design and implementation and continuing to the use of their data at multiple levels of the bureaucracy. Drawing on Zuboff’s concept of informating and the literature on accountability and performance in government organizations, I look at how the design, implementation, and use of smartphone monitoring apps and their data bring about changes in workflows and practices among the lower echelons of the bureaucracy without any major restructuring or reform, leading to greater responsiveness and performance orientation. The research reveals that low-level officials are responsive to monitoring data because it gives salience to their work and provides an objective performance measure in a challenging work environment. The research also shows that such behavior is contingent upon how effectively the organizations manage the viewing and sharing of monitoring data, with forums to discuss the data with frontline officials. 
It also points out the importance of effectively managing a smart mobile data infrastructure to sustain emerging workflows and practices.
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158312</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>System-level Design, Fabrication, and Optimization of Sorbent-based Atmospheric Water Harvesting Devices</title>
<link>https://hdl.handle.net/1721.1/158311</link>
<description>System-level Design, Fabrication, and Optimization of Sorbent-based Atmospheric Water Harvesting Devices
Wilson, Chad T.
Sorption-based atmospheric water harvesting (SAWH) has been demonstrated as a promising avenue for addressing the growing problem of water scarcity, especially in arid inland regions where alternative technologies are limited. However, current sorbent materials are often limited in their applicability by system integration and device design constraints. In this thesis, we present advancements in atmospheric water harvesting technologies in both the passive and active design spaces by leveraging a system-level approach to the modelling and optimization of devices. First, we discuss SAWH device fundamentals in terms of heat, mass, and fluid transport, and identify key components that impact device performance for both passive (solar) and active (electrical/chemical) systems, as quantified by our proposed performance metrics. Next, we develop a coupled heat and mass transport model of a passive, solar-driven atmospheric water harvesting device and quantify the impact of system variables on device operation. We use this model to fabricate an optimal system that efficiently utilizes a hydrogel-salt composite sorbent for record passive water production in the Atacama Desert. Furthermore, we propose an underlying mechanism for the observed system-level degradation of our hydrogel-salt composite and demonstrate successful extension of the sorbent’s lifetime in SAWH operation. Additionally, we use our fundamental understanding of SAWH to design an active device for portable use. Highly compact, lightweight, and energy dense, this system operates independently of external environmental conditions and produces more than 2 L/day of potable water. Finally, a generalized topology optimization approach is proposed for sorbent scaffolding structures to further improve system water output while reducing the power consumption and packing volume of atmospheric water harvesting devices.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158311</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Object-based SLAM</title>
<link>https://hdl.handle.net/1721.1/158310</link>
<description>Towards Object-based SLAM
Zhang, Yihao
Simultaneous localization and mapping (SLAM) is a fundamental capability for a robot to perceive its surrounding environment. The research area has developed for more than two decades from the original sparse landmark-based SLAM to dense SLAM, and now there is a demand for semantic understanding of the environment beyond pure geometric understanding. This thesis delves into object-based SLAM where the map consists of a set of objects with their semantic categories recognized and their poses and shapes estimated. Such a map provides vital object-level semantic and geometric perception to applications such as augmented reality (AR), mixed reality (MR), robot manipulation, and self-driving. In order to perform object-based SLAM, the sensor measurements have to undergo a series of processes. First, objects are semantically segmented in the sensor measurements. This step is typically done by a neural network. As robots are often required to bootstrap from some initial labeled datasets and adapt to different environments where labeled data are unavailable, it is important to enable semi-supervised learning to improve the robot’s performance with the unlabeled data collected by the robot itself. Second, after the objects are segmented, measurements for each object across different views have to be associated together for downstream processing. Lastly, the robot must be able to extract the object pose and shape information from the measurements without access to the detailed object CAD models which are commonly unavailable. This thesis studies these three aspects of object-based SLAM, namely semi-supervised learning of semantic segmentation in a robotics context, data association for object-based SLAM, and category-level object pose and shape estimation. The thesis closes with a discussion of how these components can be integrated into a full object-based SLAM system in the future.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158310</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ion Aggregation, Correlated Ion Transport and the Double Layer in Super-Concentrated Electrolytes</title>
<link>https://hdl.handle.net/1721.1/158309</link>
<description>Ion Aggregation, Correlated Ion Transport and the Double Layer in Super-Concentrated Electrolytes
McEldrew, Michael
In the dilute regime, properties of electrolytes are well known and their mathematical descriptions are well established. The physical picture of dilute electrolytes, in which ions are pristinely solvated, fully dissociated, and immersed in an excess of structureless solvent medium, lends itself naturally to elegant and tidy mathematical descriptions. Owing in large part to their simplicity and physical transparency, these descriptions have guided our intuition of electrolytes for the better part of the last century. However, with the explosion of interest in super-concentrated electrolytes, particularly for electrochemical energy storage applications, theoretical descriptions of electrolytes within this regime are greatly needed. The physical description of super-concentrated electrolytes is completely inverted from that of their dilute counterparts: ions have complex solvation structures, they are only partially dissociated, and they outweigh or even outnumber the solvent. This complex environment imparts unexpected properties to super-concentrated electrolytes. Understanding the origin of these unexpected properties could unlock the key design principles for the next generation of super-concentrated electrolytes. In this thesis, we develop simple, chemical-specific, theoretical models of super-concentrated electrolytes. First, we develop a continuum model of the electrical double layer in water-in-salt electrolytes that unravels the physics behind a potential mechanism for oxidative stability in WiSEs. We find that asymmetric ion solvation leads to very asymmetric water distributions within the double layer. Next, we develop a thermodynamic model of ion aggregation and solvation in super-concentrated electrolytes. The model is deeply rooted in polymer physics and treats the electrolyte as a polydisperse mixture of branched ion clusters. 
In addition to cluster distributions and thermodynamics, our model predicts the onset of a percolating ion network, termed an ionic gel, at a critical salt concentration. We apply our model to two important classes of super-concentrated electrolytes: room temperature ionic liquids (RTILs) and water-in-salt electrolytes (WiSEs). For these classes, our model could be greatly simplified, and it was parameterized and validated by extensive molecular dynamics simulations. Furthermore, we consider the effects of extensive ion clustering and gelation on ion transport, the electrochemical stability window, and the emergence of nano-heterogeneity observed in super-concentrated electrolytes.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158309</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>“Biopolitics from below?” — Lessons of Emergent Urban Governance Trend Under Covid-19 in China</title>
<link>https://hdl.handle.net/1721.1/158308</link>
<description>“Biopolitics from below?” — Lessons of Emergent Urban Governance Trend Under Covid-19 in China
Shao, Yu
This thesis interrogates emergent urban governance trends in China in response to the COVID-19 crisis, with a particular focus on the use of the narratives of epidemic and state emergency, as well as the governance strategies during the pandemic and in the so-called post-COVID era. More importantly, this thesis intends to investigate people’s responses towards emergency policies—the compliance and the creative strategies that people have adopted to demonstrate their resistance. Using a combination of ethnographic data and archival research, this thesis covers five major themes: a) the impacts of different outbreak narratives perpetuated on the Internet; b) left-wing scholars’ view (or hope) for the rise of socialism and how the Chinese state has used the socialist narrative to build up its international image; c) the strong comeback of capitalist practices as the pandemic exacerbated the precariousness of work; d) how the pandemic has been used as a justification to impose panoptic surveillance and control on Chinese citizens and to demand absolute obedience to government policies, as well as how formulaic practices dominated the post-COVID landscape; and finally, e) people’s responses and sentiments toward government policies such as lockdowns and social distancing, as displayed on social media platforms. It concludes by arguing that even in an autocratic state with increasingly tightened control justified by the epidemic, people are not passive recipients of such policies. They have come up with creative strategies to express their resistance and to negotiate with the policies. It further argues that in China, COVID-19 has sparked a new wave of active civil participation, with citizens discussing politics openly, starting from pandemic-related topics and extending to the freedom of speech at large. 
Complicating what Panagiotis Sotiris terms “biopolitics from below,” it suggests that the creative posts on social media platforms are a savvy means of claiming back our bodies.
</description>
<pubDate>Mon, 01 Feb 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158308</guid>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radiatively Cooled Magnetic Reconnection Experiments Driven by Pulsed Power</title>
<link>https://hdl.handle.net/1721.1/158307</link>
<description>Radiatively Cooled Magnetic Reconnection Experiments Driven by Pulsed Power
Datta, Rishabh
Magnetic reconnection is a ubiquitous process in astrophysical plasmas, responsible for the explosive conversion of magnetic energy into thermal and kinetic energy. In extreme astrophysical systems, such as black hole coronae and neutron star magnetospheres, radiative cooling modifies the energy partition by rapidly removing internal energy. In this thesis, we perform experimental and computational studies of magnetic reconnection in a radiatively cooled regime, previously unexplored in reconnection studies. The Magnetic Reconnection on Z (MARZ) experiments consist of a dual exploding wire array, driven by a 20 MA peak, 300 ns rise time current generated by the Z pulsed-power machine (Sandia National Labs). The load generates oppositely-directed supersonic, super-Alfvénic, collisional plasma flows with anti-parallel magnetic fields, which generate a reconnection layer (Lundquist number SL ∼ 100) in which the total cooling rate far exceeds the Alfvénic transit rate [mathematical notation].&#13;
 &#13;
Two- and three-dimensional simulations of the MARZ experiments are performed in GORGON, an Eulerian resistive magnetohydrodynamic code. The simulations demonstrate the generation of a reconnection layer, which radiatively collapses, exhibiting a rapid fall in temperature, strong compression, and an increased reconnection rate consistent with theoretical predictions. The reconnection layer is unstable to the plasmoid instability, generating secondary current sheets separated by magnetic islands. High energy X-ray emission is generated predominantly by the plasmoids. The plasmoids also collapse radiatively, and the reconnection layer recovers a laminar large aspect ratio structure, which does not exhibit further plasmoid generation, indicating stabilization of the original plasmoid instability of the current sheet.&#13;
 &#13;
The experiments confirm numerical predictions by providing evidence of plasmoid formation and strong radiative cooling. Experimental diagnostics directly measure the spatial, temporal, and spectral properties of radiative emission from the reconnecting system. The reconnection layer generates a transient burst of &gt;1 keV X-ray emission, consistent with the formation and subsequent rapid cooling of the layer. Time-gated X-ray images show fast-moving (up to 50 km s−1) hotspots in the layer, consistent with the presence of plasmoids in 3-D resistive magnetohydrodynamic simulations. X-ray spectroscopy shows that these hotspots generate the majority of Al K-shell emission (around 1.6 keV), and exhibit temperatures (170 eV) much greater than that of the plasma inflows and the rest of the reconnection layer.&#13;
 &#13;
The findings in this thesis are of particular relevance to the generation of radiative emission from reconnection-driven astrophysical events, and to the global dynamics of reconnection in strongly cooled systems. The MARZ experiments also provide a novel platform for investigating radiative effects in high-energy-density and laboratory astrophysics experiments, and for validation of radiation magnetohydrodynamic and atomic spectroscopy codes.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158307</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light-induced States and Phase Transitions in Quantum Materials investigated by Photoemission Spectroscopy and Epitaxial Synthesis</title>
<link>https://hdl.handle.net/1721.1/158272</link>
<description>Light-induced States and Phase Transitions in Quantum Materials investigated by Photoemission Spectroscopy and Epitaxial Synthesis
Choi, Dongsung
In condensed matter physics, the field that studies phases of matter and their transitions, light-induced states and phase transitions have attracted significant attention due to their importance in both fundamental research and applications. This thesis delves into three compelling studies: (1) Floquet-Bloch states (photon-dressed Bloch states) were investigated in graphene. These states are generated by a time-periodic potential of light and are closely related to the topic of Floquet engineering. (2) A light-induced insulator-to-metal transition was observed in Sr₂IrO₄, providing valuable insights into the fundamental characteristics of its ground states. (3) A light-induced topological phase transition (from a Z₂ topological insulator to a trivial insulator) was investigated in Bi-doped (Pb,Sn)Se thin films. For these studies, we employed time- and angle-resolved photoemission spectroscopy (trARPES) and molecular beam epitaxy (MBE). Through in-depth investigation into these phenomena, this thesis seeks to contribute to the broader understanding of light-matter interactions in quantum materials.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158272</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Improve Clinical Decisions and AI Safety by Leveraging Structure</title>
<link>https://hdl.handle.net/1721.1/158271</link>
<description>Learning to Improve Clinical Decisions and AI Safety by Leveraging Structure
Chauhan, Geeticka
The availability of large collections of digitized healthcare data along with the increasing power of computation has allowed machine learning (ML) for healthcare to become one of the key applied research domains in ML. ML for health has great potential in providing clinical decision-making support that can improve quality of care and reduce healthcare spending by easing clinical operations. However, the successful development of ML models in healthcare is contingent on data that is complex, noisy, heterogeneous, limited in labels and highly sensitive. In this thesis, we leverage the unique structure present in medical data along with the availability of external knowledge to guide model predictions. Additionally, we develop differentially private (DP) training techniques using gradient structure to mitigate privacy leakage.&#13;
&#13;
In this thesis, we develop methods on different medical modalities such as multivariate physiological signals of ICU patients, patient discharge summaries, biomedical scientific articles, radiology reports, chest radiography imaging and spoken utterances. We tackle tasks such as forecasting patient states, relationship extraction, disease prediction, medical report generation and differentially private model training. We begin the thesis by offering open source data processing and modeling frameworks, move towards improved interpretability of model predictions to develop clinician trust and finally investigate differentially private ML techniques to protect user data. &#13;
&#13;
First, we show that the use of aggregated feature representations based on clinical knowledge offers model robustness against evolving hospital systems. Second, we leverage external knowledge in the form of clinical concept extraction to significantly improve relationship extraction. Third, we leverage the rich information from reports associated with chest radiographs to develop highly accurate disease severity prediction models using contrastive learning. Fourth, we showcase that the report generation task offers competitive disease prediction capabilities, label efficiency and improved interpretability. Finally, we introduce novel methods for improved privacy-utility-compute tradeoffs for DP pre-training of large speech models. We highlight DP as an important component of model safety, necessitating its development in conjunction with AI safety approaches that will be pertinent in healthcare and beyond.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158271</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining Experiments and First-Principles Calculations to Understand and Engineer Metal Exsolution in Perovskites</title>
<link>https://hdl.handle.net/1721.1/158270</link>
<description>Combining Experiments and First-Principles Calculations to Understand and Engineer Metal Exsolution in Perovskites
O'Leary, Willis
Exsolution processing has emerged as a leading new route to fabricate highly active and stable ceramic-supported metal catalysts for a wide variety of applications, including solid oxide fuel cells, solid oxide electrolyzers, catalytic converters, and chemical/fuel production. In exsolution, metal cations are exsolved to the surface of a perovskite oxide solid solution under reducing conditions. The result is a perovskite backbone decorated with partially embedded metallic nanoparticles. The stability and anti-coking properties of exsolved nanoparticles have driven growing interest in exsolution materials. However, even after two decades of intense research, key open questions remain regarding exsolution's precise mechanism and, consequently, how to rationally engineer the properties of exsolution nanoparticles. This thesis aims to address these questions through a combination of experimental work and first-principles atomistic modelling with the long-term goal of accelerating the commercialization of exsolution materials.&#13;
&#13;
We first investigate the impact of perovskite composition on the properties of Ni nanoparticles exsolved from bulk SrTi₀.₉₄Ni₀.₀₆O₃₋δ. We adjust the makeup of the Sr site, adding dopants of varying valence and ionic radii as well as vacancies, and measure how these changes modulate the surface density of the exsolved nanoparticles. We then use density functional theory (DFT) calculations to explain the observed trends, finding that the energetics of cation surface segregation and surface reduction control nanoparticle nucleation kinetics. This work provides valuable new insights into the exsolution mechanism, and, for the first time, introduces a quantitative model capable of accurately predicting the experimental exsolution properties of a given perovskite composition from first principles. &#13;
&#13;
Next, we extend this quantitative model to capture the influence of the exsolution conditions on the properties of Ni nanoparticles, this time focusing solely on Ni exsolution from bulk Sr₀.₈La₀.₁Ca₀.₁Ti₀.₉₄Ni₀.₀₆O₃₋δ. We first measure the dependence of nanoparticle density on exsolution temperature and oxygen partial pressure. We rationalize the empirical trends using the LaMer theory for nucleation and our model previously developed to predict composition effects. This achievement points towards the first-ever method for first-principles prediction of a generic perovskite composition’s exsolution properties under varying reducing conditions. Thus, we make a major step towards fully in silico design of exsolution materials, greatly increasing their commercial attractiveness. &#13;
&#13;
Finally, we develop a novel, highly efficient DFT methodology to predict Raman signatures of point defects and apply this methodology to interpret SrTi₀.₉₄Ni₀.₀₆O₃₋δ’s complex Raman spectrum. Based on empirical and DFT-derived Raman spectra, we characterize SrTi₀.₉₄Ni₀.₀₆O₃₋δ’s defect chemistry and local structure. Our findings are a vital first step towards using Raman spectroscopy to study and screen exsolution materials. More broadly, our computational methodology supercharges Raman spectroscopy as a tool to characterize local structure in a wide range of technologically relevant material systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158270</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding community changes in ecological systems: a probabilistic and geometric perspective</title>
<link>https://hdl.handle.net/1721.1/158269</link>
<description>Understanding community changes in ecological systems: a probabilistic and geometric perspective
Deng, Jie
Regulating and predicting community changes in ecological systems represent fundamental challenges in science and engineering, particularly in systems subject to constant environmental perturbations (e.g., natural, in vivo, and in situ environments). Consequently, a central goal of ecological research has been to understand the processes of coexistence, invasion, and assembly in open systems that underlie changes in the composition of ecological communities. These changes can be induced by either natural events (e.g., viruses infecting humans) or human actions (e.g., fecal microbiota transplantation). Although previous studies have theoretically explored criteria for successful coexistence, invasion, and assembly under specific or fixed environmental conditions, the variable and often unknown environmental conditions in nature have left these criteria largely untested.&#13;
&#13;
The overarching goal of my PhD thesis is to provide a testable theoretical framework for the dynamics of coexistence, invasion, and assembly under environmental uncertainty (i.e., in nature or open systems). This framework, rooted in the generalized Lotka-Volterra model, adopts a probabilistic and geometric perspective to understand these dynamics. In particular, my thesis comprises three core projects. The first project develops probabilistic system-level measures to quantify the effects of third-party species on the coexistence of a pair (or subset) of species by integrating population dynamics models (i.e., the Lotka-Volterra model) with in vivo experimental data from fruit fly gut microbiota. Additionally, I test general heuristic rules based on the proposed probabilistic measures using in vitro soil and in vivo gut microbial communities. The aim is to predict how non-resident species (invaders) can alter resident communities and to assess the applicability of our probabilistic measures. The second project seeks to unify coexistence and invasion theories within a geometric and probabilistic framework that is testable. This unification enables us to predict and test the impact of interspecific interactions on invasion and exclusion probabilities without requiring detailed model parameterization or extensive datasets. The third project identifies the general principle governing the development of ecological systems under environmental uncertainty, which could assist in regulating or even predicting changes in ecological community compositions. This principle is validated across a broad spectrum of ecological scales, from large mammals to gut microbes, through publicly available data. I believe this thesis will bring us closer to understanding the processes that influence community compositions and their changes, knowledge that holds great potential for advancing bio-conservation, bio-technologies, and bio-medicine.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158269</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applying Statistical Analysis and Machine Learning to Improve the Ice Sensing Algorithm</title>
<link>https://hdl.handle.net/1721.1/158268</link>
<description>Applying Statistical Analysis and Machine Learning to Improve the Ice Sensing Algorithm
Herron, Lucas A.
The detection of sea ice is a major problem faced by Argo floats operating in polar regions. In these areas, the presence of sea ice threatens to damage or destroy floats in the event of an impact at the surface. While methods have been proposed and implemented to combat this danger, the most successful of which is the Ice Sensing Algorithm (ISA), further work is necessary to fully mitigate the risks, particularly in the Arctic. In this analysis, past CTD profiles from the Arctic are compiled and matched with sea ice data to examine the performance of the ISA and recommend potential changes and new methods to further improve its accuracy. This is accomplished by fitting the data to statistical and machine learning models to predict the presence of ice and analyzing the results. Results show that both modifications to current methods and the inclusion of new variables may increase the predictive power of the ISA. Specifically, the analysis shows that the use of point measurements (as opposed to a metric over a pressure range) at the shallowest allowable depth provides the best performance. The additional inclusion of practical salinity and time of year as predictive variables also increases the performance of the algorithm. Results and statistics on the performance of the algorithm are provided and analyzed in various regions.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158268</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Robotic Manipulation in Remote Environments with Shared Autonomy</title>
<link>https://hdl.handle.net/1721.1/158267</link>
<description>Enabling Robotic Manipulation in Remote Environments with Shared Autonomy
Phung, Amy
The evolution of robotics technology continues to facilitate exploration and scientific study in remote environments, enabling research in areas that were previously impossible to reach. Robots operating in space and marine environments encounter similar operational challenges, as both face high operational costs, bandwidth-limited conditions, and natural, unstructured environments where dynamic obstacles might be present. Within the oceanographic domain, conventional deep-sea sampling operations involve remotely operated vehicles (ROVs) equipped with robotic manipulator arms to complete dexterous tasks at depth. While effective, deep-sea ROV operations require specialized instrumentation, highly trained shipboard personnel, and large oceanographic vessels, which make deep-sea samples inaccessible to most.&#13;
This thesis presents the SHared Autonomy for Remote Collaboration (SHARC) framework, and evaluates its utility within an oceanographic context. By leveraging shared autonomy, SHARC enables shore-side operators to collaboratively carry out underwater sampling and manipulation tasks, regardless of their prior manipulator operations experience. With SHARC, operators can conduct manipulation tasks using natural language and hand gestures through a virtual reality (VR) interface. The interface provides remote operators with a contextual 3D scene understanding that is updated according to bandwidth availability.&#13;
Evaluation of the SHARC framework through controlled lab experiments indicates that SHARC’s VR interface enables novice operators to complete manipulation tasks in framerate-limited conditions (i.e., &lt;0.5 frames per second) faster than expert pilots using the conventional topside controller. For both novice and expert users, the VR interface also increased the task completion rate and improved sampling precision. During sea trials, SHARC enabled collection of an underwater in-situ X-ray fluorescence (XRF) measurement at more than 1000 meters water depth in the Eastern Pacific with centimeter-level precision by remote scientists with no prior piloting experience. This demonstration provides compelling evidence of SHARC’s utility for conducting delicate operations in unstructured environments across bandwidth-limited communications, which holds relevance for improving operations in other sensitive domains where dexterity is required. SHARC’s ability to relax infrastructure requirements and engage novice shore-side users provides a promising avenue for democratizing access to deep-sea research.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158267</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cobordism of manifolds with w1, w2 and w4 vanishing.</title>
<link>https://hdl.handle.net/1721.1/158232</link>
<description>Cobordism of manifolds with w1, w2 and w4 vanishing.
Giambalvo, Vincent William.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1966; Bibliography: leaves 75-77.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158232</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of compensators for double integrator plants</title>
<link>https://hdl.handle.net/1721.1/158231</link>
<description>Comparison of compensators for double integrator plants
Schwartz, Adam L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1989; Includes bibliographical references (leaves 186-189).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158231</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contribution to the methods of measuring stresses below the surface</title>
<link>https://hdl.handle.net/1721.1/158230</link>
<description>Contribution to the methods of measuring stresses below the surface
Safoglu, Recep Ali,
            1920-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1947; Vita.; Includes bibliographical references (leaves 183-184).
</description>
<pubDate>Wed, 01 Jan 1947 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158230</guid>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local politics and industrial adjustment : the political economy of Italy in the 1980's</title>
<link>https://hdl.handle.net/1721.1/158229</link>
<description>Local politics and industrial adjustment : the political economy of Italy in the 1980's
Locke, Richard M.,
            1959-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1989; Title as it appears in the M.I.T. Graduate List, Feb. 1989: "Eppure Si Muove"--the political economy of industrial change in Italy.; Includes bibliographical references (leaves 285-310).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158229</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Issues in new product development--the introduction of tape automated bonding technology</title>
<link>https://hdl.handle.net/1721.1/158228</link>
<description>Issues in new product development--the introduction of tape automated bonding technology
Maggs, Virginia Loop.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1990; Includes bibliographical references (leaves 142-144).
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158228</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solvent extraction of cobalt from thiocyanate solutions</title>
<link>https://hdl.handle.net/1721.1/158227</link>
<description>Solvent extraction of cobalt from thiocyanate solutions
Hard, Robert A.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1957; Vita.; Bibliography: leaf 84.
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158227</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sanitary design for Hopeworth Sanitarium</title>
<link>https://hdl.handle.net/1721.1/158226</link>
<description>Sanitary design for Hopeworth Sanitarium
Eli, Carl Stephens.; Babbitt, Harold E. 1888-1970.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1911
</description>
<pubDate>Sun, 01 Jan 1911 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158226</guid>
<dc:date>1911-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characteristics of electric strain gages at low temperatures</title>
<link>https://hdl.handle.net/1721.1/158225</link>
<description>Characteristics of electric strain gages at low temperatures
Sevand, Ali H.; Day, Emmett E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1946; Bibliography: leaf 21.
</description>
<pubDate>Tue, 01 Jan 1946 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158225</guid>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the electrification of the suburban zone of the New York, New Haven and Hartford Railroad centering in Boston</title>
<link>https://hdl.handle.net/1721.1/158224</link>
<description>A study of the electrification of the suburban zone of the New York, New Haven and Hartford Railroad centering in Boston
Bancker, Elbert H.; Bangratz, Ernest George.; Becker, James H. 1894-1970.; Farist, Charles J.; Moore, Irwin L.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1920; Appendix contains numerous pamphlets.
</description>
<pubDate>Thu, 01 Jan 1920 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158224</guid>
<dc:date>1920-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of the reaction of sulfur vapor with a metallic oxide</title>
<link>https://hdl.handle.net/1721.1/158223</link>
<description>An investigation of the reaction of sulfur vapor with a metallic oxide
Hard, Robert A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1949; Bibliography: leaf 59.
</description>
<pubDate>Sat, 01 Jan 1949 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158223</guid>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A neutron diffraction study of the field-induced diamagnetism in the semimetal bismuth.</title>
<link>https://hdl.handle.net/1721.1/158222</link>
<description>A neutron diffraction study of the field-induced diamagnetism in the semimetal bismuth.
Collins, Steven Robert.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1979; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158222</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A compressible, high frequency numerical model of helicopter noise due to blade/vortex interaction</title>
<link>https://hdl.handle.net/1721.1/158221</link>
<description>A compressible, high frequency numerical model of helicopter noise due to blade/vortex interaction
Lima, Luiz Hamilton de Resende.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1980; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158221</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A neutron diffraction study of the magnetization in dilute Cu (Fe) alloys.</title>
<link>https://hdl.handle.net/1721.1/158220</link>
<description>A neutron diffraction study of the magnetization in dilute Cu (Fe) alloys.
Dickens, Michael Hugh.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1974; Vita.; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158220</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of production smoothing in a job shop environment</title>
<link>https://hdl.handle.net/1721.1/158219</link>
<description>A study of production smoothing in a job shop environment
Cruickshanks, Allan Benjamin.; Drescher, Robert D.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1982; Vitae.; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158219</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of multi position letter sorting machine operation in the United States Postal Service</title>
<link>https://hdl.handle.net/1721.1/158218</link>
<description>A study of multi position letter sorting machine operation in the United States Postal Service
Cruce, A. C.,
            1858-1919.; Lee, Jerry Kenneth.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1982; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158218</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Initial investigation of the relation between the mode of MHD current transport and the near-electrode boundary layers</title>
<link>https://hdl.handle.net/1721.1/158217</link>
<description>Initial investigation of the relation between the mode of MHD current transport and the near-electrode boundary layers
Daentl, Wyatt S.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1982; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158217</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Xenobiotic metabolism and mutation in diploid human lymphoblasts</title>
<link>https://hdl.handle.net/1721.1/158216</link>
<description>Xenobiotic metabolism and mutation in diploid human lymphoblasts
Crespi, Charles L.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1982; Bibliography: leaves 148-155.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158216</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A computer controlled fluid supply system</title>
<link>https://hdl.handle.net/1721.1/158215</link>
<description>A computer controlled fluid supply system
Curtis, Kent Wesley.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1982; Bibliography: leaf 37.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158215</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constrained stochastic climate simulation</title>
<link>https://hdl.handle.net/1721.1/158214</link>
<description>Constrained stochastic climate simulation
Curtis, David Carleton.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1982; Bibliography: leaves 215-226.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158214</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Demand-responsive transit : problems and possibilities.</title>
<link>https://hdl.handle.net/1721.1/158213</link>
<description>Demand-responsive transit : problems and possibilities.
Ewing, Reid Harris.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography: leaves 255-264.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158213</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinetics of nitrogen solution from arc plasmas into liquid iron.</title>
<link>https://hdl.handle.net/1721.1/158212</link>
<description>Kinetics of nitrogen solution from arc plasmas into liquid iron.
Esimai, Charles Nduka.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Vita.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158212</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of the air bleeds and several typical idling systems of the carburetors</title>
<link>https://hdl.handle.net/1721.1/158211</link>
<description>Analysis of the air bleeds and several typical idling systems of the carburetors
Ding, Qinghua.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1946; Bibliography: leaf 59.
</description>
<pubDate>Tue, 01 Jan 1946 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158211</guid>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoscale Origins of Thermal Transport Phenomena for Hybrid Layered Perovskites</title>
<link>https://hdl.handle.net/1721.1/158207</link>
<description>Nanoscale Origins of Thermal Transport Phenomena for Hybrid Layered Perovskites
Dahod, Nabeel S.
An exciting and fundamentally powerful modern methodology for materials development is the process by which artificial solids are rationally built piece-by-piece from nanoscale “building blocks”. Among the library of nanomaterials currently at the forefront of this pursuit, two-dimensional layered lead halide perovskites (2D LHPs), are of particular interest. These materials, solid crystals composed of alternating layers of atomically thin organic and inorganic subphases, possess novel optical and electronic properties that make them particularly suited for use in devices including solar cells, LEDs, flexible electronics, and even lasers. While significant early strides have been made in investigating charge carrier transport through dynamic models and sophisticated experiments, comparatively little attention has been given to understanding the manner in which the design of these nanostructured solids impacts their macroscopic thermal properties via thermal carrier (phonon) transport. This knowledge, however, is critical to addressing the thermal management constraints necessary to the design of reliable and stable devices. &#13;
To this end, this dissertation seeks to elucidate the thermal stability and fundamental pathways for heat transport within 2D LHP artificial solids. I first present an experimental investigation into the thermal and structural stability of these 2D LHPs near room temperature using differential scanning calorimetry and x-ray diffraction. This analysis reveals near-room temperature melting transitions isolated to the organic component of the nanomaterials. The existence of such an isolated phase transition indicates the materials behave thermophysically as composites, a hypothesis that is supported by the effective use of a lever rule in estimation of the heat capacity of the materials. &#13;
I discuss the theoretical foundation and experimental construction of a frequency domain thermoreflectance technique to effectively measure the cross-plane thermal conductivity of 2D LHPs. This technique is then utilized to perform the first measurement of the thermal conductivity of 2D LHPs. This experimental study reveals that even in terms of their thermal transport pathways, 2D LHPs can be treated as composite materials. Specifically, lead bromide 2D LHPs exhibit structure-property relationships characteristic of ballistic phonon transport within isolated subphases and diffuse scattering at the organic-inorganic interfaces between layers. &#13;
Finally, I report the first measurements of the vibrational spectrum for 2D LHPs via low frequency Raman spectroscopy. This probe identifies the persistence of bulk-like phonons&#13;
even in the atomically thin 2D LHPs, in addition to identifying coherent acoustic phonons in lead iodide 2D LHPs potentially capable of carrying thermal energy across the organic-inorganic interfaces without scattering. Each of the observations made throughout this dissertation suggests the thermophysical representation of 2D LHPs as composite materials is a useful framework for understanding their thermal transport properties. That so many material properties can be effectively predicted simply from the bulk properties of the component phases is surprising given both the long-range order of the artificial solids and the sub-nanometer length scale of the individual component layers, and underlines the potential for intelligent engineering of the thermal properties of 2D LHPs without deleterious influences on the sterling optoelectronic properties.
</description>
<pubDate>Sat, 01 Jun 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158207</guid>
<dc:date>2019-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Double trouble: Predicting new variant counts across two heterogeneous populations</title>
<link>https://hdl.handle.net/1721.1/158206</link>
<description>Double trouble: Predicting new variant counts across two heterogeneous populations
Shen, Yunyi
Collecting genomics data across multiple heterogeneous populations (e.g., across different cancer types) has the potential to improve our understanding of disease. Despite sequencing advances, though, resources often remain a constraint when gathering data. So it would be useful for experimental design if experimenters with access to a pilot study could predict the number of new variants they might expect to find in a follow-up study: both the number of new variants shared between the populations and the total across the populations. While many authors have developed prediction methods for the single-population case, we show that these predictions can fare poorly across multiple populations that are heterogeneous. We prove that, surprisingly, a natural extension of a state-of-the-art single-population predictor to multiple populations fails for fundamental reasons. We provide the first predictor for the number of new shared variants and new total variants that can handle heterogeneity in multiple populations. We show that our proposed method works well empirically using real cancer and population genetics data.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158206</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Influence of Root Geometry on Soil Cohesion and Anchoring Ability through Geologic Time</title>
<link>https://hdl.handle.net/1721.1/158205</link>
<description>The Influence of Root Geometry on Soil Cohesion and Anchoring Ability through Geologic Time
Colicci, Vittorio
Vegetation has become ubiquitous among most modern landscapes. However, for much of the Earth’s history, land plants were absent. Their rapid diversification throughout the Devonian and Carboniferous brought about a massive shift in geomorphology and landscape evolution. Complex rooting structures were the principal agents of change, mechanically reinforcing their substrates and generating cohesive sediments through weathering. This work examines the root systems of three major tree genera from these periods: Calamophyton, Lepidodendron, and Calamites. Simplified reconstructions were designed, 3D printed, and uprooted from a sand testbed to explore the effects of root geometry on anchoring ability. Force and displacement data were gathered for each model and used to calculate anchoring strength and uprooting work. Force laws were then derived to approximate the anchoring contributions of root weight, sediment weight, static friction, and shear strength. This analysis revealed a strong dependence on the span, surface area, and volume of the root system, which were used to normalize values across different geometries. The Calamophyton model required the greatest uprooting force per unit length, whereas the Lepidodendron model required the greatest uprooting force per unit area and volume. These results were interpreted within the environmental context of each genus alongside particular features of root geometry. Calamophyton contributed less to soil cohesion due to its simple unbranched architecture; however, it likely increased wetland habitability for subsequent species. Meanwhile, Lepidodendron would have bolstered cohesion on account of its densely-packed dichotomous rootlets. Calamites is unique in its clonal reproductive habit and nodal branching architecture, which could have helped it colonize particularly unstable environments.
We maintain that the earliest trees played a key role in surface stabilization within their ecosystems and likely paved the way for species that followed.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158205</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares</title>
<link>https://hdl.handle.net/1721.1/158204</link>
<description>One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares
Min, Youngjae
While deep neural networks are capable of achieving state-of-the-art performance in various domains, their training typically requires iterating for many passes over the dataset. However, due to computational and memory constraints and potential privacy concerns, storing and accessing all the data is impractical in many real-world scenarios where the data arrives in a stream. In this thesis, we investigate the problem of one-pass learning, in which a model is trained on sequentially arriving data without retraining on previous datapoints. Motivated by the increasing use of overparameterized models, we develop Orthogonal Recursive Fitting (ORFit), an algorithm for one-pass learning which seeks to perfectly fit every new datapoint while changing the parameters in a direction that causes the least change to the predictions on previous datapoints. By doing so, we bridge two seemingly distinct algorithms in adaptive filtering and machine learning, namely the recursive least-squares (RLS) algorithm and orthogonal gradient descent (OGD). Our algorithm uses the memory efficiently by exploiting the structure of the streaming data via an incremental principal component analysis (IPCA). Further, we show that, for overparameterized linear models, the parameter vector obtained by our algorithm is what stochastic gradient descent (SGD) would converge to in the standard multi-pass setting. Finally, we generalize the results to the nonlinear setting for highly overparameterized models, relevant for deep learning. Our experiments show the effectiveness of the proposed method compared to the baselines.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158204</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a Precision Needle for Injection of Fluid into the Suprachoroidal Space of the Eye for the Treatment of Retinal Detachment</title>
<link>https://hdl.handle.net/1721.1/158203</link>
<description>Design of a Precision Needle for Injection of Fluid into the Suprachoroidal Space of the Eye for the Treatment of Retinal Detachment
Rutherford, Emma
Rhegmatogenous retinal detachment (RRD) is a vision-threatening condition that affects 10 to 18 per 100,000 people in the United States annually [1]. The current standard for treatment is pars plana vitrectomy (PPV), which is an invasive and expensive surgical procedure that leaves patients unable to perform usual activities for four to six weeks. In addition, current methods tend to produce distorted vision upon recovery. In-office Suprachoroidal Viscopexy™ (SCVEXY™) is a minimally invasive technique recently developed by Dr. Rajeev Muni for treating rhegmatogenous retinal detachment (RRD) which has been performed on a handful of people [2]. This procedure has the potential to greatly reduce the cost and recovery time of RRD while also improving the quality of the repair. It can be performed with no incision, no tamponade agent, and no patient post-op positioning requirements [2]. SCVEXY works by injecting viscous fluid into the suprachoroidal space, a “potential space” between the sclera and choroid, creating a “bleb” of fluid underneath the tear that pushes the choroid towards the retina and allows it to reattach. However, difficulty in safely injecting into this space at the location of the retinal tear currently limits the widespread utilization of the technique. If this procedure was made reliably safe, it could greatly change how retinal detachments are treated and improve patient outcomes. The primary difficulty arises in precisely locating the suprachoroidal space in order to inject the viscous fluid. The thickness of the sclera varies from patient to patient and between locations on the eye. Additionally, the scleral and choroidal tissues are very thin, leaving little room for positional error. Hemorrhage may occur if the needle punctures through the choroid and into the subretinal space, which could lead to bad outcomes. 
This work presents a device developed to minimally invasively reach posterior segments of the eye, deploy an injection needle in-situ with high resolution, sense when the needle tip has passed into the suprachoroidal space (SCS), and inject a viscous fluid. Not only will this device be used to treat retinal detachment in a minimally invasive manner, but it could also be used for drug injection or fluid aspiration via the suprachoroidal and subretinal spaces for treatment of a variety of posterior ocular diseases.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158203</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep learning methods to study structurally heterogeneous macromolecules in vitro and in situ</title>
<link>https://hdl.handle.net/1721.1/158202</link>
<description>Deep learning methods to study structurally heterogeneous macromolecules in vitro and in situ
Powell, Barrett M.
Proteins, RNA, and other biomolecules form complex 3-D structures that dynamically interact to carry out essential biological processes. These macromolecular complexes are often structurally heterogeneous, which is key to executing or regulating their specific biological functions.&#13;
&#13;
To understand the molecular mechanisms underpinning these biological functions, structural biologists aim to determine the 3-D structure of the relevant macromolecule or macromolecular complex. Most such structural insights use techniques that strip the macromolecule of its cellular context (i.e., in vitro) and, subsequently, report a single average structure. However, recent advances in cryogenic electron microscopy (cryo-EM) provide avenues to determine sets of heterogeneous structures from a single dataset, and simultaneous advances in cryogenic electron tomography (cryo-ET) enable the resolution of macromolecules in their native cellular environment (i.e., in situ).&#13;
&#13;
This thesis describes the conceptualization, implementation, and application of tomoDRGN, a deep learning method developed to resolve structurally heterogeneous macromolecules in situ. TomoDRGN extends the well-characterized cryoDRGN method, which facilitates analysis of heterogeneous structures by cryo-EM, to cryo-ET, where I show it efficiently learns an ensemble of unique 3-D volumes from the structurally heterogeneous dataset provided. I additionally describe the application of tomoDRGN to datasets of diverse macromolecules, highlighting its ability to resolve conformational and compositional heterogeneity and to identify rare yet biologically informative structural states. This thesis also details an approach and protocol for rapid structural characterization of bacterial ribosomes in situ, wherein tomoDRGN facilitates powerful upstream dataset filtration. Finally, this thesis provides a detailed protocol for the characterization of heterogeneous cryo-EM datasets with cryoDRGN and, in doing so, illustrates the types of new insights enabled by the cryoDRGN and tomoDRGN Deep Reconstructing Generative Networks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158202</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organic chemistry laboratory</title>
<link>https://hdl.handle.net/1721.1/158121</link>
<description>Organic chemistry laboratory
Lindsay, William B.,
            1858-
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1881
</description>
<pubDate>Sat, 01 Jan 1881 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158121</guid>
<dc:date>1881-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Work in organic laboratory</title>
<link>https://hdl.handle.net/1721.1/158120</link>
<description>Work in organic laboratory
Stantial, Frank G.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158120</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Notes on the preparation of: alcohol absolute, ethyl bromide, ethyl iodide, acetyl chloride, ethyl amines, zinc ethyl, chloral, trichloracetic acid</title>
<link>https://hdl.handle.net/1721.1/158119</link>
<description>Notes on the preparation of: alcohol absolute, ethyl bromide, ethyl iodide, acetyl chloride, ethyl amines, zinc ethyl, chloral, trichloracetic acid
Macfarlane, Wm. W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1879
</description>
<pubDate>Wed, 01 Jan 1879 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158119</guid>
<dc:date>1879-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report on work done in the organic laboratory</title>
<link>https://hdl.handle.net/1721.1/158118</link>
<description>Report on work done in the organic laboratory
Allen, Walter S.
            (Walter Spooner)
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1879
</description>
<pubDate>Wed, 01 Jan 1879 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158118</guid>
<dc:date>1879-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report of work in organic laboratory</title>
<link>https://hdl.handle.net/1721.1/158117</link>
<description>Report of work in organic laboratory
Woolworth, J. G.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158117</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The wind resistance of ships</title>
<link>https://hdl.handle.net/1721.1/158116</link>
<description>The wind resistance of ships
Ober, Shatswell,
            1894-
Thesis: B.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1916
</description>
<pubDate>Sat, 01 Jan 1916 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158116</guid>
<dc:date>1916-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonrigid single-axis space integrator dynamics</title>
<link>https://hdl.handle.net/1721.1/158115</link>
<description>Nonrigid single-axis space integrator dynamics
Shaw, Edward Eugene.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1964; Includes bibliographical references (leaves 63-64).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158115</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The dynamic behavior of an ammonia synthesis reactor</title>
<link>https://hdl.handle.net/1721.1/158114</link>
<description>The dynamic behavior of an ammonia synthesis reactor
Eymery, Jean-Pierre.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 1964; Vita.; Includes bibliographical references (leaves 215-217).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158114</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stratospheric radiance</title>
<link>https://hdl.handle.net/1721.1/158113</link>
<description>Stratospheric radiance
Schweickart, Rusty.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1963; Includes bibliographical references (leaves 68-70).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158113</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A fast response instrument to directly read the ratio of two electrical signals</title>
<link>https://hdl.handle.net/1721.1/158112</link>
<description>A fast response instrument to directly read the ratio of two electrical signals
Shaw, Edward Eugene.
Thesis: B.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1960; Includes bibliographical references (leaf 26).
</description>
<pubDate>Fri, 01 Jan 1960 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158112</guid>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A comparison of the existing methods of studying the stability of earth slopes</title>
<link>https://hdl.handle.net/1721.1/158111</link>
<description>A comparison of the existing methods of studying the stability of earth slopes
La Casta-Sanchez, Salvador.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1959; Includes bibliographical references (leaf 15).
</description>
<pubDate>Thu, 01 Jan 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158111</guid>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Many meson production in meson-nucleon collision in the Chew-Low formalism</title>
<link>https://hdl.handle.net/1721.1/158110</link>
<description>Many meson production in meson-nucleon collision in the Chew-Low formalism
Tarimer, Niyazi.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1957; Vita.; Includes bibliographical references (leaf 60).
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158110</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deformation processes of zirconium</title>
<link>https://hdl.handle.net/1721.1/158109</link>
<description>Deformation processes of zirconium
Rapperport, Eugene J.
            (Eugene John)
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1955; Vita.; Bibliography: leaves 99-101.
</description>
<pubDate>Sat, 01 Jan 1955 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158109</guid>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A compilation of phenomenological methods to solve mesonic problems without a "meson theory"</title>
<link>https://hdl.handle.net/1721.1/158108</link>
<description>A compilation of phenomenological methods to solve mesonic problems without a "meson theory"
Tarimer, Niyazi.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1954
</description>
<pubDate>Fri, 01 Jan 1954 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158108</guid>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stress-rupture properties of beryllium and a beryllium-nickel alloy</title>
<link>https://hdl.handle.net/1721.1/158107</link>
<description>Stress-rupture properties of beryllium and a beryllium-nickel alloy
Rapperport, Eugene J.
            (Eugene John); Gelles, Stanley H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Metallurgy, 1952; Includes bibliographical references (leaf 22).
</description>
<pubDate>Tue, 01 Jan 1952 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158107</guid>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of machining and tensile test data of low carbon steel.</title>
<link>https://hdl.handle.net/1721.1/158106</link>
<description>Comparison of machining and tensile test data of low carbon steel.
Sevand, Ali Hikmet.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1945; Bibliography: leaves 30-31.
</description>
<pubDate>Mon, 01 Jan 1945 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158106</guid>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pulsed laser ablation of calcified biological tissue : physical mechanisms and clinical applications</title>
<link>https://hdl.handle.net/1721.1/158105</link>
<description>Pulsed laser ablation of calcified biological tissue : physical mechanisms and clinical applications
Izatt, Joseph A.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1991; Includes bibliographical references (leaves 192-205).
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158105</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>German nuclear dilemmas, 1955-1965</title>
<link>https://hdl.handle.net/1721.1/158104</link>
<description>German nuclear dilemmas, 1955-1965
Kelleher, Catherine McArdle.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1967; Vita.; Bibliography: leaves 665-686.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158104</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning models</title>
<link>https://hdl.handle.net/1721.1/158103</link>
<description>Planning models
Crooks, Lawrence.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1982; Bibliography: leaves 121-127.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158103</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reorganization and documentation of a motor unit decomposition program</title>
<link>https://hdl.handle.net/1721.1/158102</link>
<description>Reorganization and documentation of a motor unit decomposition program
Creigh, John Lock.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1982; Bibliography: leaf 150.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158102</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An econometric/engineering model of United States demand for semi-fabricated copper products disaggregated by shape and end-use sector : and an econometric/engineering model of world demand for semi-fabricated copper products disaggregated by major consuming area</title>
<link>https://hdl.handle.net/1721.1/158101</link>
<description>An econometric/engineering model of United States demand for semi-fabricated copper products disaggregated by shape and end-use sector : and an econometric/engineering model of world demand for semi-fabricated copper products disaggregated by major consuming area
Cummings, Mary Rowena.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1982; Bibliography: leaves 174-177.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158101</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting torsional fatigue crack growth (Mode III) in turbo-generator shafts</title>
<link>https://hdl.handle.net/1721.1/158100</link>
<description>Predicting torsional fatigue crack growth (Mode III) in turbo-generator shafts
Nayeb-Hashemi, Hamid.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1982; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158100</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report on work done in the organic laboratory</title>
<link>https://hdl.handle.net/1721.1/158099</link>
<description>Report on work done in the organic laboratory
Lund, James.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1881
</description>
<pubDate>Sat, 01 Jan 1881 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158099</guid>
<dc:date>1881-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Approaches for Understanding and Redesigning Enzyme Catalysis</title>
<link>https://hdl.handle.net/1721.1/158059</link>
<description>Computational Approaches for Understanding and Redesigning Enzyme Catalysis
Karvelis, Elijah
The remarkable specificity and catalytic efficiency of many enzymes make them attractive for applications ranging from therapeutics to chemical manufacturing. However, it remains challenging to identify the specific structural and dynamic mechanisms underlying the catalytic power of enzymes, which has limited our ability to re-engineer catalytic properties. In this thesis, I address these shortcomings by developing and demonstrating computational strategies comprised of techniques spanning statistical mechanics, machine learning, and protein design, and I apply them to the enzyme ketol-acid reductoisomerase (KARI), whose economic viability for the production of isobutanol would be strengthened by enhancing its activity on one of its two native substrates: 2-acetolactate (ACL). While computational enzyme redesign strategies for increased activity have traditionally focused on decreasing the energetic gap between the enzyme-substrate ground state and transition state, this thesis postulates and evaluates whether a more holistic treatment including the dynamics of complete turnover events could further elucidate properties affecting turnover efficiency and guide the identification of mutants with enhanced catalytic function.&#13;
&#13;
In the first study, we describe a novel redesign strategy for enhanced specific activity (turnover number) based on analysis of enzyme-substrate turnover dynamics. The approach combined statistical mechanical path sampling algorithms and machine learning methods to identify the structural characteristics of enzyme-substrate complexes primed for successful conversion of substrate to product, which were then energetically stabilized by mutating KARI. A subset of candidate mutants were tested using path sampling-based reaction rate constant calculations, and eight mutants were identified with computed improvements in turnover number of up to four orders of magnitude for the isomerization of ACL. Further analysis revealed structural mechanisms by which enhanced activity was attained. In the second study, we examine the effects of these same mutations on the isomerization of KARI's other native substrate: 2-aceto-2-hydroxybutyrate (AHB), and we find that the mutants selected for increased activity on ACL had varied levels of activity on AHB. These variations in mutant activity on AHB were explained by analysis of WT-AHB simulations, which showed that only some of the structural mechanisms related to enhanced ACL catalysis transferred to, and thereby facilitated, AHB catalysis. This thesis highlights the influence of conformational states that are visited during the dynamics of substrate turnover and their role on enzyme catalysis, and it furthermore suggests a framework with which researchers may consider and apply these effects when engineering catalytic function.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158059</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Microporous Polymers for Separations</title>
<link>https://hdl.handle.net/1721.1/158058</link>
<description>Designing Microporous Polymers for Separations
Storme, Kayla R.
In Chapter 1, we investigate the influence of side-chain length and dispersity in ring-opening metathesis polymerization (ROMP) polymers with pore-generating side chains. Macromonomers with four discrete monodispersities are separated and polymerized to produce bottlebrush polymers with monodisperse side chains. Each bottlebrush polymer is fabricated into a free-standing film. Pure-gas experiments are performed to explore the impact of dispersity and side chain length on gas separation performance. &#13;
&#13;
In Chapter 2, we evaluate the mixed-gas performance of a class of bottlebrush polymers described in Chapter 1. Gas sorption, diffusion, and CO₂-induced plasticization are reported. Competitive sorption effects are studied using a 50:50 mixture of CO₂/CH₄. Separation performance at different compositions of CO₂/CH₄ is also explored. &#13;
&#13;
In Chapter 3, we incorporate nitrile functionality into the structure of a family of polymers with rigid, porogenic side chains described in Chapters 1 and 2. Statistical and block copolymers are synthesized to demonstrate the role of grafting density on separation performance and CO₂ plasticization resistance. Sorption experiments are performed to determine improvements to selectivity.&#13;
&#13;
In Chapter 4, we describe the optimized SNAr synthesis of a poly(arylene ether) (PAE) that produces high molecular weight polymers. The synthesis of an analogous PAE with C-H functionality instead of C-F is also reported. Porosity and free volume are investigated in both PAEs. Separation performance is characterized and compared to other polymers with similar structural motifs.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158058</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Game Theoretic Approach to Resilient Space System Design</title>
<link>https://hdl.handle.net/1721.1/158057</link>
<description>A Game Theoretic Approach to Resilient Space System Design
Jones, Michael P.
There is a growing need for space missions that maintain performance in uncooperative or even adversarial environments. Space system designers must account for resilience to non-cooperative interactions in the design process while trading resilience and performance against cost. Prior academic work has considered resilience for system design, especially in the context of environmental factors; however, current literature does not include a space system design methodology that explicitly models non-cooperative, interactive threats to produce a more resilient design. To address this gap, a novel game-theoretic methodology is proposed to capture the interactive nature of non-cooperative systems at the strategic design level. The result is a two-player strategic design game in which the system under design and the threat system are both modeled as rational actors, and design options for the system architecture and threat system architecture are strategic choices for each actor. Performance, resilience, and cost metrics are calculated by an operational-level simulation of system-threat interactions. Tradespace and sensitivity analyses based on the results are used to evaluate the cost premium of adding resilience to the system and to demonstrate the strategic design choices that provide the most cost-effective means of increasing resilience to the modeled threats. The resulting methodology is presented through three case studies demonstrating the applicability of the methodology across multiple space mission applications.&#13;
&#13;
The first case study evaluates a low earth orbit (LEO) Satellite Communications (SATCOM) system design. The results show that perfect resilience (no drop in performance) to the modeled ground-based jamming threat requires a 224% cost increase and that additional satellites are a more cost-effective means of increasing resilience than fewer, more capable satellites. The second case study focuses on a Global Navigation Satellite System (GNSS) and adds more fidelity to the physical model and the design choices available to both the threat and the system. A medium earth orbit (MEO) constellation that maximizes resilience to the modeled jamming and kinetic threats consists of 56 satellites in 7 planes, while in LEO this requires 819 satellites in 21 planes. For a LEO GNSS constellation to be more cost-effective than a MEO GNSS constellation with the same level of resilience, the LEO system's first unit cost must be at most 1/10 the MEO system's first unit cost. Cost-based sensitivity analyses demonstrate how results are influenced by cost model estimates and show how program managers can use this methodology to guide program decisions as cost estimates improve over time. The third case study looks at a non-cooperative GEO mission through an abstracted two-player game environment called GEO Patrol. This case study adds fidelity to the operational interactions by requiring complex decision-making. Reinforcement learning is used with over 7 million games of self-play to train a two-hidden-layer neural network to generate actions. Human-in-the-loop experiments verify the simulation results and improve understanding of the underlying system dynamics. Over 20 volunteers played 53 games representing 3 distinct scenarios. The average difference between the volunteer and simulation results is 5.1%, verifying the simulation. The three case studies demonstrate how the methodology can be applied across disparate space missions with varying levels of model fidelity.
Systems designers can apply the methodology to produce both quantitative and qualitative recommendations to ensure the final system is resilient by design.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158057</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Benefits of angularly-controlled field switching on the pulling-into-step ability of salient-pole synchronous motors</title>
<link>https://hdl.handle.net/1721.1/157987</link>
<description>Benefits of angularly-controlled field switching on the pulling-into-step ability of salient-pole synchronous motors
Edgerton, Harold E. (Harold Eugene), 1903-1990.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1931; Includes bibliographical references (leaves [73]-[75]).
</description>
<pubDate>Thu, 01 Jan 1931 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157987</guid>
<dc:date>1931-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A compilation of notes on steam pumps</title>
<link>https://hdl.handle.net/1721.1/157986</link>
<description>A compilation of notes on steam pumps
Sargent, F. T.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157986</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Treatment of Vershire copper ore</title>
<link>https://hdl.handle.net/1721.1/157985</link>
<description>Treatment of Vershire copper ore
Southworth, Harry C.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157985</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation and report on the Pomeroy Iron Works at West Stockbridge, Mass.</title>
<link>https://hdl.handle.net/1721.1/157984</link>
<description>An investigation and report on the Pomeroy Iron Works at West Stockbridge, Mass.
Schwarz, Theodore E.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157984</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>West Boston Draw</title>
<link>https://hdl.handle.net/1721.1/157983</link>
<description>West Boston Draw
Rich, C. L.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157983</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nitro-substitution products of benzole</title>
<link>https://hdl.handle.net/1721.1/157982</link>
<description>Nitro-substitution products of benzole
Briggs, Franklin H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1881; Includes bibliographical references (leaves 26-27).
</description>
<pubDate>Sat, 01 Jan 1881 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157982</guid>
<dc:date>1881-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report of work done in the organic chemical laboratory</title>
<link>https://hdl.handle.net/1721.1/157981</link>
<description>Report of work done in the organic chemical laboratory
Morgan, Frank H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157981</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A family of curves for the geometric mean distance of one rectangle from another</title>
<link>https://hdl.handle.net/1721.1/157980</link>
<description>A family of curves for the geometric mean distance of one rectangle from another
Uluant, Cemal Ali.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1944; Includes bibliographical references (leaves [25]-26).
</description>
<pubDate>Sat, 01 Jan 1944 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157980</guid>
<dc:date>1944-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leptonic and hadronic polarization in semi-leptonic inclusive weak and exclusive electromagnetic interactions</title>
<link>https://hdl.handle.net/1721.1/157979</link>
<description>Leptonic and hadronic polarization in semi-leptonic inclusive weak and exclusive electromagnetic interactions
Raskin, Alan Steven.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1987; Bibliography: leaves 220-223.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157979</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organization of the marginal band of avian erythrocytes</title>
<link>https://hdl.handle.net/1721.1/157978</link>
<description>Organization of the marginal band of avian erythrocytes
Swan, Judith Ann.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1985; Bibliography: leaves 170-182.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157978</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Levels, layers, and planes : the framework of a system of knowledge representation semantics</title>
<link>https://hdl.handle.net/1721.1/157977</link>
<description>Levels, layers, and planes : the framework of a system of knowledge representation semantics
Smith, Brian Cantwell.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Bibliography: leaves 199-203.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157977</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A statistical approach to heavy-ion transfer reactions to the continuum</title>
<link>https://hdl.handle.net/1721.1/157976</link>
<description>A statistical approach to heavy-ion transfer reactions to the continuum
Karp, Joel Steven.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1980; Includes bibliographical references.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157976</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fresnoite examined by circularly polarized Raman scattering.</title>
<link>https://hdl.handle.net/1721.1/157975</link>
<description>Fresnoite examined by circularly polarized Raman scattering.
Chieu, Trieu Can.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157975</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis and design of a rotary compressor</title>
<link>https://hdl.handle.net/1721.1/157974</link>
<description>Analysis and design of a rotary compressor
Cheimets, Peter Norman.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Bibliography: leaf 67.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157974</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decision making for energy conservation in existing commercial buildings.</title>
<link>https://hdl.handle.net/1721.1/157973</link>
<description>Decision making for energy conservation in existing commercial buildings.
Chertow, Richard Philip.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157973</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conical flow modeling for polygonal cross section bodies at off design conditions.</title>
<link>https://hdl.handle.net/1721.1/157972</link>
<description>Conical flow modeling for polygonal cross section bodies at off design conditions.
Kamkar, Hamid.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1977; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157972</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Riveted lattice bridge</title>
<link>https://hdl.handle.net/1721.1/157971</link>
<description>Riveted lattice bridge
Ritchie, James, 1882-, author.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1878; Manuscript.
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157971</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Proxy Records of Climate and Carbon Cycle Perturbations in the Paleozoic: Integrating Isotope Geochemistry and Sedimentology</title>
<link>https://hdl.handle.net/1721.1/157970</link>
<description>Multi-Proxy Records of Climate and Carbon Cycle Perturbations in the Paleozoic: Integrating Isotope Geochemistry and Sedimentology
Anderson, Noah
Carbonate rocks are a valuable archive of past environmental conditions. To glean robust information from this archive, we must understand how carbonate sediments form, ensure our analytical techniques are optimized, and consider how inherently local deposition of sediments can communicate information about global changes in climate. Chapter 1 proposes a new conceptual model for the formation of ooids that suggests that these small carbonate grains could form while buried in the shallow sediment pile during certain intervals of Earth history. Chapters 2 and 3 calibrate the clumped isotope paleothermometer for calcite, dolomite, and apatite, resolving significant discrepancies in calculated paleotemperatures. Chapter 4 applies clumped isotope thermometry to Early Mississippian strata and demonstrates a ~5°C global cooling and substantial ice volume expansion coincident with a major perturbation to the global carbon cycle. Chapter 5 examines the extent to which diagenesis and facies- and phase-specific effects drive a major Early Mississippian carbon isotope excursion. In aggregate, this thesis outlines a roadmap for assessing changes to climate and the carbon cycle for carbonate rocks in the Paleozoic.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157970</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Clinical Cost-Effectiveness as a Novel Metric for Steering Emerging Medical Technology</title>
<link>https://hdl.handle.net/1721.1/157969</link>
<description>Clinical Cost-Effectiveness as a Novel Metric for Steering Emerging Medical Technology
Richards, Daniel Herndon
Background: Steering an emerging medical technology involves making decisions under uncertainty. Localized drug delivery (LDD) is an emerging medical technology that may be useful in treating epilepsy, which is burdensome and difficult to clinically manage. Cost-effectiveness analysis (CEA) is a model-based, problem-oriented framework for determining whether a treatment should be prescribed and reimbursed, though it is typically used to compare treatment alternatives that are already clinically available. Two research questions were posed: How can a clinical CEA be constructed for an emerging medical technology to enhance its steering? And, under what conditions would an emerging technology, LDD, be prescribed in place of resective surgery for drug-resistant epilepsy? Methods: A CEA was constructed with the clinical decision point defined as pediatric patients with drug-resistant epilepsy of focal origin. A new treatment alternative, LDD, was proposed as a solution-neutral, generalized concept, and technological factors were posited that influence parameters in the CEA. A one-way sensitivity analysis was conducted to verify the model and observe its most sensitive parameters. A probabilistic sensitivity analysis was conducted to observe P10 and P90 values for clinical effectiveness. Results: The most sensitive driver of incremental effectiveness of LDD over surgery was, per the model, the potential of LDD to reduce systemic side effects. The potential clinical benefit of LDD over surgery was estimated, probabilistically, as between P10 and P90 values of 0.081 and 0.339 QALYs, respectively. Limitations of the model were discussed. A ‘utopia point’ was calculated. The relationship of the CEA to a total addressable market (TAM) calculation was discussed. The CEA modeling process enhanced learning about the problem and solution spaces. Conclusions: Despite its limitations, CEA modeling can enhance steering activities for emerging medical technologies.
Insights from CEA may also help to assess trade-offs in capabilities and cost, as well as observe trends in clinical performance.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157969</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence and the US-China Balance of Power</title>
<link>https://hdl.handle.net/1721.1/157968</link>
<description>Artificial Intelligence and the US-China Balance of Power
Chang, Benjamin Angel
How will artificial intelligence affect the US-China balance of power? While a nascent literature debates whether AI may upend strategic stability or revolutionize the nature of warfare, existing discussions suffer from both imprecise conceptualization and scarce data. In three essays, this dissertation evaluates the impact of AI on the nuclear balance, the conventional balance, and long-term US-China competition more generally by focusing on deep learning, generating data through simulation and supply chain analysis.&#13;
&#13;
The first essay defends the focus on deep learning, then presents an end-to-end conceptualization of how its technical qualities translate into usefulness across different categories of modern military tasks, which in turn affect, when contextualized to the particular dyad under study, the strategic balances across different domains of US-China competition. At each analytic layer, the paper condenses deep learning’s effects into several generalizations, tying AI to existing debates in security studies and setting an agenda for future research.&#13;
&#13;
The second essay simulates US-China nuclear war in Python to assess AI’s impact on the strategic balance, focusing on the tracking of mobile platforms on land. It finds that AI reduces the total “effective counterforce area” – the area the United States would have to destroy with nuclear weapons, to carry out a splendid first-strike – by one to two orders of magnitude. Under low to medium alert, the simulation finds this would enable successful US nuclear counterforce. While countermeasures are available to China, the essay predicts heightened nuclear tensions as a result.&#13;
&#13;
Finally, the third essay exploits supply chain datasets to assess each side’s ability to bring AI-enabled autonomous weapons to bear in future conventional conflicts. I find that control over the production of advanced AI chips by the United States and allies almost certainly means the United States would better exploit such weapons, if they emerged as decisive in modern warfare, within at least the next ten years. Potential Chinese policy responses, such as cannibalizing its civilian sector or substituting with older chips, would likely fail for technical reasons.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157968</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding the structural diversity of discrete polymers accessible through iterative exponential growth</title>
<link>https://hdl.handle.net/1721.1/157967</link>
<description>Expanding the structural diversity of discrete polymers accessible through iterative exponential growth
Khokhlov, Khrystofor
Iterative exponential growth (IEG) is a powerful method for the synthesis of atomically defined macromolecules. However, preparation of enantiopure IEG-ready monomers can be challenging, which may limit the attractiveness of IEG as a tool for the study of structure-property relationships in discrete macromolecules, both in materials and in biological systems. Here, we present a new strategy for the synthesis of orthogonally protected monomers, suitable for IEG through cycles of azidation, alkyne deprotection, and CuAAC, in fewer steps and from readily available and affordable building blocks. This monomer synthesis was achieved through the development of a novel allylation methodology. Using alkynylation of epichlorohydrin, LiBr Finkelstein, and TfOH-promoted allylation, we have been able to prepare a monomer for 3A (number of carbons in each polymer repeat unit, excluding alkyne) IEG in just three steps. Furthermore, the same reactions can be integrated in the synthesis of other IEG architectures (2A/4A/5A), thus expanding the structural diversity and readily accessible substrate scope for atomically defined macromolecules. The configurations of stereogenic centers in IEG-mer backbones are defined by the starting material (R or S epichlorohydrin) and can be further controlled by combining different stereoisomers in the desired fashion. This work outlines a conceptual strategy to diversify and expand the chemical space of discrete macromolecules and enable efficient and quick access to a variety of IEG-mer scaffolds.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157967</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Picophytoplankton of the Northeast U.S. Shelf: Community Composition and Dynamics</title>
<link>https://hdl.handle.net/1721.1/157966</link>
<description>Picophytoplankton of the Northeast U.S. Shelf: Community Composition and Dynamics
Stevens, Bethany Lynn Fowler
Marine picophytoplankton are the most abundant primary producers in the ocean and are expected to be favored by the ongoing effects of climate change. Predicting the response of marine ecosystems to these changes requires mechanistic knowledge of picophytoplankton ecology. This thesis uses a combination of long-term monitoring, cruise data, population models, and high-throughput sequencing to investigate the dynamics of picophytoplankton across scales of space and time that are relevant both to the physiology of the individual cells and to the structure of the Northeast U.S. Shelf (NES), a productive and economically important coastal ecosystem. To identify the drivers of seasonal changes in picophytoplankton abundance, I first estimate daily division and loss rates for a nearshore community of picoeukaryotes over a 16-year period. I compare their cell concentrations, vital rates, and responses to environmental variables to those of the cyanobacterium Synechococcus. Next, to reveal how these dynamics relate to changes in community composition, I analyze nine years of monthly metabarcoding data and characterize taxonomic variability within the picoeukaryote assemblage. In the second half of this thesis, I explore spatial environmental variability and test the extent to which data from the nearshore observatory are representative of the picophytoplankton communities across the NES. I analyze flow-cytometry data collected from 22 regional research cruises, estimate daily Synechococcus and picoeukaryote division rates from underway data, and describe the distinct depth distributions of the two groups from subsurface samples. The major findings of this thesis are that, across the NES, the picoeukaryotes divide at much higher rates than the more abundant Synechococcus and are subject to greater top-down control from grazing or viral lysis.
Both groups are light limited in the fall, temperature limited in the spring, and undergo earlier spring blooms in warmer offshore waters. For Synechococcus, the relationships between cell concentration, division rate and environmental parameters are consistent across the continental shelf, while the picoeukaryote community appears to be nutrient-limited farther from shore. Together, this work creates a detailed picture of the various controls on picophytoplankton abundance within a dynamic coastal ecosystem and advances our understanding of how picophytoplankton communities respond to environmental change.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157966</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein Folding, Host Cell Proteostasis, and Viral Evolution</title>
<link>https://hdl.handle.net/1721.1/157965</link>
<description>Protein Folding, Host Cell Proteostasis, and Viral Evolution
Yoon, Jimin
Pandemics and epidemics caused by pathological RNA viruses, such as the 1918 influenza pandemic, the global AIDS epidemic, and the recent coronavirus pandemic, impose a severe burden on global health and the economy. A major challenge associated with developing effective antiviral strategies is the exceptionally high mutation rate of RNA viruses, which endows them with a remarkable capacity to adapt to selection pressures such as antibodies or antiviral drugs. Hence, it is critical to understand the molecular-level factors that can constrain and potentiate viral evolution. While mutations benefit viruses by generating the diversity required for evolution, they also threaten viral viability because the majority of non-conservative amino acid substitutions cause protein folding defects. Mutations that result in substantial protein folding defects cannot be tolerated, regardless of how adaptively beneficial the resultant protein variant otherwise would be. Importantly, in cells, protein folding is assisted by intricate networks of chaperones and quality control factors, termed proteostasis networks. When a substitution on a protein impedes its proper folding, proteostasis network components can triage the defective protein variant to chaperones for folding assistance, or to quality control factors for timely degradation. Interestingly, virtually all RNA viruses rely on their host’s proteostasis network components for viral protein folding. It follows that the host’s proteostasis network could play a prominent role in defining the sequence space accessible to an evolving viral protein. In this thesis, I address how host proteostasis networks shape viral protein evolution. First, I describe how the composition of the host cell’s proteostasis machineries shapes the accessible sequence space of the human immunodeficiency virus envelope protein.
Second, I focus on an important immune-escape variant of influenza nucleoprotein whose fitness depends on host chaperones, and reveal the underlying molecular mechanism of this chaperone dependence. Finally, I demonstrate how chaperone machineries can determine the host range of viruses, and identify potential pathways by which viruses can evolve to overcome this selection pressure. Overall, elucidating how protein folding and host cell proteostasis affect viral protein evolution would substantially improve our ability to accurately predict RNA virus evolution and host-switching, and may enable design of host-targeted therapeutics that reduce the adaptability of RNA viruses.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157965</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Constraints on Melting Processes in the Earth and Small Rocky Bodies</title>
<link>https://hdl.handle.net/1721.1/157964</link>
<description>Experimental Constraints on Melting Processes in the Earth and Small Rocky Bodies
Hoyos Muñoz, Susana
The compositional and thermal evolution of rocky bodies in the Solar System is determined by the melt generation and crystallization processes in their interiors. This thesis investigates large-scale igneous processes in the Earth, the Moon, and the Angrite Parent Body using a multidisciplinary approach that integrates high-pressure experiments, geochemical analysis, and petrologic modeling. In Chapter 1, I examine the mantle source lithology of Hawaiian pre-shield tholeiitic volcanism through high-pressure equilibrium experiments and geochemical modeling. In Chapter 2, I define the crystallization sequence and petrogenesis of the young mare basalts collected by the Chang'e 5 mission and propose a model for melt generation in the Moon at ~2 Ga. In Chapter 3, I estimate a minimum radius of ~1600 km for the Angrite Parent Body using near-liquidus equilibrium experiments. The implications for planet formation models of a differentiated moon-sized planetesimal accreting in the first 3 Ma of the Solar System are also discussed. In Chapter 4, I introduce a new experimental technique for studying melt migration in volcanoes, which allowed me to observe and describe the mechanisms that control melt migration in the upper crust. Together, these studies advance our knowledge of melt production and crystallization conditions in planetary interiors and provide fundamental insights into the geologic history of the Earth and other rocky bodies in the Solar System.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157964</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biochemical Analysis of Poly(ethylene terephthalate) Film Degradation Kinetics of Engineered IsPETase Variants</title>
<link>https://hdl.handle.net/1721.1/157963</link>
<description>Biochemical Analysis of Poly(ethylene terephthalate) Film Degradation Kinetics of Engineered IsPETase Variants
Zhong-Johnson, En Ze Linda
Plastic production and pollution have become a global crisis, with 79% of waste plastics landfilled in 2015 and only 12% recycled, demonstrating the need for rapid improvements in waste management and recycling technologies. Poly(ethylene terephthalate) (PET) is a major plastic polymer that is heavily investigated for enzymatic recycling. The presence of the ester bond in the polymer allows hydrolysis via serine esterases, such as cutinases and lipases. However, little is known about the surface reaction and how biochemical behavior might differ on a 2D solid surface compared to the solution phase. Consequently, traditional solution-phase biochemical models, such as Michaelis-Menten, may not be directly applicable to the kinetics of these enzymes, as the catalysis occurs in a heterogeneous phase. To improve the fundamental understanding of the enzymatic reaction on the surface and derive an appropriate biochemical model for kinetic analysis, this thesis aims to develop a simple kinetic assay of PET biodegradation, identify mutations that positively impact product formation rates, and develop a novel biochemical model to analyze these mutations, one that fully describes the kinetic profiles observed for these enzymes. I developed a kinetic assay based on spectrophotometric measurements of UV absorbance of the products in the reaction supernatant, as degradation products harboring the benzene ring absorb between 240-280 nm. The method was found to be reliable for obtaining relative measurements of initial reaction rates but cannot be used to determine the absolute concentration of products in the supernatant. I also developed a directed evolution assay of IsPETase using solid PET film substrates and found that mutation T116P improved maximum product accumulation by 30% based on kinetic studies and thermostability, while mutations S238N and S290P improved purification yield and thermostability.
Finally, my collaborators and I found that the activity of IsPETase is impacted by surface crowding and developed a biochemical model to analyze the kinetic data of mutants. Based on the kinetic model, T116P reduced crowding susceptibility with no impact on activity, resulting in improved macroscopic degradation rates. In conclusion, crowding tendency may become a major property to be targeted for enzyme engineering to improve solid-substrate depolymerases for industrial applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157963</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cotton manufacture and the ring frame / by H.M. Silsbee.</title>
<link>https://hdl.handle.net/1721.1/157933</link>
<description>Cotton manufacture and the ring frame / by H.M. Silsbee.
Silsbee, H. M.,
            author.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1874; Manuscript.
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157933</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The results of an experiment on cylinder condensation, using steam superheated to a temperature of 590°F. with a pressure of 70 lbs. per sq. in. - the apparent cut-off at the front end being 4.38/20 ths. and at the crank end 4.18/20 / Thos. D. Plimpton.</title>
<link>https://hdl.handle.net/1721.1/157932</link>
<description>The results of an experiment on cylinder condensation, using steam superheated to a temperature of 590°F. with a pressure of 70 lbs. per sq. in. - the apparent cut-off at the front end being 4.38/20 ths. and at the crank end 4.18/20 / Thos. D. Plimpton.
Plimpton, Thos. D.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157932</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A few notes on governors and their principles of action</title>
<link>https://hdl.handle.net/1721.1/157931</link>
<description>A few notes on governors and their principles of action
Lewis, Wilfred.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157931</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Locomotives</title>
<link>https://hdl.handle.net/1721.1/157930</link>
<description>Locomotives
Stanwood, J. B.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157930</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A discussion on the construction of gear teeth</title>
<link>https://hdl.handle.net/1721.1/157929</link>
<description>A discussion on the construction of gear teeth
Hibbard, Tom,
            1947-
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157929</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The expansion of saturated steam</title>
<link>https://hdl.handle.net/1721.1/157928</link>
<description>The expansion of saturated steam
Head, James H.,
            -1869.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157928</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The working of 5668 lbs. third grade and 1147 lbs. 1st grade argentiferous galena ore from the Merrimac Mine, Newburyport, Mass., including crushing, washing, roasting, and smelting of the ore and refining the products obtained</title>
<link>https://hdl.handle.net/1721.1/157927</link>
<description>The working of 5668 lbs. third grade and 1147 lbs. 1st grade argentiferous galena ore from the Merrimac Mine, Newburyport, Mass., including crushing, washing, roasting, and smelting of the ore and refining the products obtained
Towne, Linwood O.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157927</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Vershire copper ore and its metallurgical treatment</title>
<link>https://hdl.handle.net/1721.1/157926</link>
<description>The Vershire copper ore and its metallurgical treatment
Bartol, George.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157926</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Steel</title>
<link>https://hdl.handle.net/1721.1/157925</link>
<description>Steel
Hunt, Alfred E.
            (Alfred Ephraim),
            1855-1899.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157925</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Hancock Inspirator</title>
<link>https://hdl.handle.net/1721.1/157924</link>
<description>The Hancock Inspirator
Schwamb, Peter,
            1858-1928.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157924</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pumping engines</title>
<link>https://hdl.handle.net/1721.1/157923</link>
<description>Pumping engines
Kilham, A. C.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157923</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Study of the alcohol thermometer at low temperatures</title>
<link>https://hdl.handle.net/1721.1/157922</link>
<description>Study of the alcohol thermometer at low temperatures
White, Anthony C.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1882
</description>
<pubDate>Sun, 01 Jan 1882 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157922</guid>
<dc:date>1882-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Columbus Avenue Bridge, Boston, Mass.</title>
<link>https://hdl.handle.net/1721.1/157921</link>
<description>On the Columbus Avenue Bridge, Boston, Mass.
Kebler, Julian A.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157921</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Ashtabula Bridge</title>
<link>https://hdl.handle.net/1721.1/157920</link>
<description>The Ashtabula Bridge
Wiggin, Frank E.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157920</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A wrought iron post - truss</title>
<link>https://hdl.handle.net/1721.1/157919</link>
<description>A wrought iron post - truss
Raeder, Henry.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157919</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Albany Street Bridge</title>
<link>https://hdl.handle.net/1721.1/157918</link>
<description>The Albany Street Bridge
Hodgdon, F. W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157918</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Broadway Draw Bridge, Boston, Mass.</title>
<link>https://hdl.handle.net/1721.1/157917</link>
<description>The Broadway Draw Bridge, Boston, Mass.
Freeman, John R.,
            1950-
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157917</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Front Street Bridge in Worcester, Mass.</title>
<link>https://hdl.handle.net/1721.1/157916</link>
<description>The Front Street Bridge in Worcester, Mass.
Copeland, Fred K.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157916</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A review of the Buffalo Water Supply</title>
<link>https://hdl.handle.net/1721.1/157915</link>
<description>A review of the Buffalo Water Supply
Buttolph, H. T.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157915</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>West Chester Park Bridge over B. &amp; P. R.R. in Boston</title>
<link>https://hdl.handle.net/1721.1/157914</link>
<description>West Chester Park Bridge over B. &amp; P. R.R. in Boston
Breed, Joshua B. F.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157914</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Merrimac River Bridge</title>
<link>https://hdl.handle.net/1721.1/157913</link>
<description>Merrimac River Bridge
Baldwin, Thomas W.
            (Thomas Williams),
            1849-
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157913</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A comparison of the different methods of determining carbon and graphite in cast irons and steels, of volumetric methods for determining iron in irons, steels, and iron ores, and notes on Meinicke's process for determining sulphur and phosphorus in cast irons and steels</title>
<link>https://hdl.handle.net/1721.1/157912</link>
<description>A comparison of the different methods of determining carbon and graphite in cast irons and steels, of volumetric methods for determining iron in irons, steels, and iron ores, and notes on Meinicke's process for determining sulphur and phosphorus in cast irons and steels
Pope, Thomas E.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157912</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A report of work in the organic laboratory</title>
<link>https://hdl.handle.net/1721.1/157911</link>
<description>A report of work in the organic laboratory
Allbright, William B.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157911</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of the possibility of using low voltage A. C. on the third rail system entering the Pennsylvania terminal</title>
<link>https://hdl.handle.net/1721.1/157910</link>
<description>A study of the possibility of using low voltage A. C. on the third rail system entering the Pennsylvania terminal
Maschi, A. P.; Driscoll, J. J.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1926
</description>
<pubDate>Fri, 01 Jan 1926 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157910</guid>
<dc:date>1926-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Abrupt change of load on a synchronous machine</title>
<link>https://hdl.handle.net/1721.1/157909</link>
<description>Abrupt change of load on a synchronous machine
Edgerton, Harold E.
            (Harold Eugene),
            1903-1990.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1927; Includes bibliographical references (leaves [102]-[103]).
</description>
<pubDate>Sat, 01 Jan 1927 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157909</guid>
<dc:date>1927-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Void formation in copper and selenium ion irradiated molybdenum.</title>
<link>https://hdl.handle.net/1721.1/157908</link>
<description>Void formation in copper and selenium ion irradiated molybdenum.
Chernock, Richard Steven.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157908</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Consolidation circuit for an MHD channel.</title>
<link>https://hdl.handle.net/1721.1/157907</link>
<description>Consolidation circuit for an MHD channel.
Cheng, Rowley Lop Wah.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157907</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A fast approximate solution to the electrical power generation rescheduling and load shedding problem</title>
<link>https://hdl.handle.net/1721.1/157906</link>
<description>A fast approximate solution to the electrical power generation rescheduling and load shedding problem
Chan, Sherman Man.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157906</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coal nitrogen conversion to NO [subscript x] during simultaneous oxidation and pyrolysis.</title>
<link>https://hdl.handle.net/1721.1/157905</link>
<description>Coal nitrogen conversion to NO [subscript x] during simultaneous oxidation and pyrolysis.
Cheng, Irene Teresa.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1978; Bibliography: leaves 127-129.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157905</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the spectrum of resonance fluorescence induced by a monochromatic field.</title>
<link>https://hdl.handle.net/1721.1/157904</link>
<description>Measurement of the spectrum of resonance fluorescence induced by a monochromatic field.
Wu, Frederick Yung-Fung.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1976; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157904</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on attention and creative thought</title>
<link>https://hdl.handle.net/1721.1/157884</link>
<description>Essays on attention and creative thought
Wang, Jocelyn Yuxing
In the mental life of an ordinary person, creative thoughts, as well as other non-rigid forms of thought, such as mind wandering, are both pervasive and important for our cognitive endeavors. The goal of my dissertation is to provide a theory of these non-rigid forms of thought by understanding some of the cognitive mechanisms that underlie them, as well as to understand how these underlying mechanisms contribute to our epistemic lives more generally in all kinds of reasoning. Chapter 1 (based on co-authored work with Azenet Lopez) begins with a puzzle that arises from research on mind wandering: since during mind wandering we plausibly prioritize the information relevant to the concurrent tasks less, why does mind wandering sometimes improve rather than impair concurrent task performance? I resolve the puzzle by rejecting the standard conception of attention, according to which the more focused one’s attention is, the better it is at improving task performance. I instead argue that certain tasks are better performed with a more diffuse rather than focused mode of attention. I offer a conception of "diffuse attention" that generalizes from external to internal forms of attention and conceptualize mind wandering as an instance of it. Chapter 2 turns to provide an account of creative thinking, which is closely related to mind wandering. I argue that previous accounts in philosophy about the generation of creative thought are incomplete due to overlooking the role of what I call “memory gists”. Memory gists are memory contents that represent more abstract or qualitative features that are extracted from the specific, surface-level features in the memory representations that were initially encoded in memory. I argue that generating and using memory gists in memory search enables highly creative people to form connections between memory contents that are not usually associated with each other by revealing the commonalities shared in their gists.
Moreover, I argue that different mechanisms underlie online and offline generation of memory gists: the former involves the mode of diffuse attention that I conceptualized in Chapter 1, while the latter involves memory consolidation during sleep or wakeful rest. The active role that memory plays in creative thinking raises some questions about how to conceptualize the function of memory in our epistemic lives more generally. I explore this topic further in Chapter 3, where I reject the traditional view in epistemology that memory merely functions to preserve previously acquired information, such as information acquired through perception. I argue instead that one of the functions of memory is to improve our understanding of what was represented in the contents that we previously acquired. This is possible thanks to the fact that during memory consolidation, our memory system further processes previously acquired information, and generates representations about relationships between different components of the subject under consideration. My work thus contributes to the ongoing project of understanding memory as an active process rather than a mere repository of information, and highlights understanding as one of the epistemic values that memory generates.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157884</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Figuring the Middle Ground: A Search for Authorship in Perceiving China's COVID-19 Lockdowns</title>
<link>https://hdl.handle.net/1721.1/157883</link>
<description>Figuring the Middle Ground: A Search for Authorship in Perceiving China's COVID-19 Lockdowns
Zhang, San
Witnessing and attempting to comprehend China’s controversial response to COVID-19 over the past three years from a geographically distant yet culturally and emotionally intimate standpoint, I have grappled with multiple perspectives, sometimes as an insider, sometimes as an outsider, and most of the time as an impostor to both. As I continually query the incoherence of my positionality, I find myself in an obscure middle ground where my voice is filtered as inauthentic and unheeded. I ask myself: What should I do? What can I do?&#13;
&#13;
This project is an effort to give myself a voice in the process of figuring out the “middle ground”—a gradient of unsettled propositions stretching between cultural identities, negotiating with constructed collective memories, and discursively evolving over a three-year-long uncanny journey trying to perceive the COVID-19 lockdowns in China. By accepting the “middle ground” as a valid stance, I was able to devise a set of methods for navigating the complexity of materials gathered at various times and locations. In addition, utilizing architectural representation tools, I curated a collection of works that reproduce the research process and exhibit the processed information.&#13;
&#13;
This endeavor is not intended to rationalize pandemic control. Rather, it cultivates a ground for reflection that deconstructs a dichotomous perception of right or wrong, drawing attention to individual lived experiences that provide a nuanced interpretation of the COVID-19 pandemic as an international health emergency that affected everyone. Although somewhat fuzzy and uneasy, the “middle ground” position indicates the possibility that a personal desire to develop one’s authorship can lead to a means of making sense of a global crisis.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157883</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topics in Marma (မာရမာ)</title>
<link>https://hdl.handle.net/1721.1/157882</link>
<description>Topics in Marma (မာရမာ)
Marma, Rani Ukhengching (ဦး ချမ်း စိန် မာရမာ)
Marma¹, an endangered indigenous language of Bangladesh, is spoken by approximately 200,000 Marma individuals residing in Bangladesh’s southern region called the Chittagong Hill Tracts (CHT). The Marma language is closely related to Rakhine and Burmese, and many lexical items are almost identical to those in Burmese and Rakhine, “although Marma exhibits a more conservative phonological profile than Burmese in the grammatical particles” (Keisuke 2011). This study analyzed several morphemes and their roles in shaping discourse structure in Marma information structure (topic-focus articulation). Marma has “agglutinative morphology”, meaning words are formed by stringing together morphemes in specific sequences. We observed prefixation, suffixation, and infixation in Marma. We analyzed the multifunctionality of these selected morphemes [“က=ga/ka, ကို=go/ko, စာ=cha, ရာ=ra, ယည်=yi”] within Marma discourse and explored their implications for a better understanding of information structure in the Marma language. At the end of this paper, through instrumental analysis, we propose three tones in Marma (i. high and creaky, ii. low, and iii. falling).&#13;
 &#13;
Key words: Marma, indigenous language, information structure, topic and focus, morphology and tone.&#13;
&#13;
¹“According to Bradley (1985:180), the Marma group would have first migrated from Arakan to the Chittagong Hill Tracts by the early sixteenth century and then after the Burmese conquest in 1785. They live mainly in the Chittagong Hill Tracts where they form one of the main Indigenous groups (Htin, 2015).”
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157882</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constraints on vowel-zero alternations in Hungarian</title>
<link>https://hdl.handle.net/1721.1/157881</link>
<description>Constraints on vowel-zero alternations in Hungarian
Takács, Dóra Kata
I analyze a large set of Hungarian nominal stems whose last vowel alternates with zero in certain contexts (Vago (1980), Siptár &amp; Törkenczy (2000)): e.g. bokor [bokor], bokr-ok [bokr-ok]. I argue that the mechanism underlying these alternations is syncope, departing in this from earlier work (Vago (1980), Abondolo (1988), J. Jensen &amp; Stong-Jensen (1988, 1989), Törkenczy (1995), Abrusán (2005)), which assumes epenthesis or metathesis. My research focuses on which stems fall into this closed group of vowel-zero alternating stems. I show that there is an interaction between phonological processes that repair phonotactically illicit consonant clusters – like voicing assimilation, gemination, affrication – and vowel-zero alternations. I present a proposal relying on underspecification that correctly predicts that these phonological processes block vowel-zero alternations. The grammar that generates this result includes a ranking schema where the constraint triggering syncope (referred to below as Syncope) is outranked not only by the Markedness constraints that define illicit CC-clusters in Hungarian but also by the faithfulness constraints that are normally violated in the repair of such clusters. The general ranking I will argue for is: (1) Markedness (*CC for various CCs) » Faithfulness to Cs » Syncope » Max V. I also present results from a nonce word experiment, which confirms that Hungarian speakers are aware of the systematic restrictions my analysis characterizes. The broad significance of the work is to document a large-scale conspiracy (Kisseberth (1970)) whereby permissible CC clusters emerge in at least two ways: through direct action of repair processes (assimilation or merger of two Cs into one) and through blockage of the syncope process that could yield the inputs to such repairs.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157881</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Not Function but Function Conquered: Against a Functionalist Theory of Directives</title>
<link>https://hdl.handle.net/1721.1/157880</link>
<description>Not Function but Function Conquered: Against a Functionalist Theory of Directives
Hill, John
Ordering, requesting, and inviting are examples of directive speech acts. Philosophers have offered different accounts of what it is to perform a directive, which they have developed using different theoretical resources. Attitudinal theories of speech acts try to explain what it is to perform a directive in terms of a speaker’s beliefs, desires, and intentions. Nonattitudinal theories of speech acts try to explain directives in terms of something else.&#13;
&#13;
This thesis is concerned with functionalism, a nonattitudinal theory of speech acts. According to functionalism, performing a directive is making an utterance with the etiological function of causing hearers to act in response to one’s utterance. I argue that functionalism is false. I develop counterexamples that show functionalism is too permissive about the kinds of causation suitable for generating directives. I argue further that the most plausible way to address these counterexamples is to become more attitudinal: rather than be permissive, functionalism should hold that directives and hearers’ responses to them are caused by specific internal processes.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157880</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Know Thy Cell-Free DNA: Early Detection of Microsatellite Instability Using Ultra-Low-Pass Cell-Free DNA Sequences</title>
<link>https://hdl.handle.net/1721.1/157879</link>
<description>Know Thy Cell-Free DNA: Early Detection of Microsatellite Instability Using Ultra-Low-Pass Cell-Free DNA Sequences
Lu, Nicole
Microsatellites are short segments of repeated DNA motifs (i.e., base pair patterns) that are widespread in our genomes. Microsatellites are inherently more mutable than other genomic locations, and since cancer cells undergo many more cell divisions, microsatellites are useful for distinguishing tumor DNA from normal (non-cancerous) DNA.&#13;
&#13;
Microsatellite instability (MSI) arises as a result of mismatch repair deficiency (MMRD), wherein a patient loses function of both copies of certain genes related to mismatch repair.&#13;
&#13;
Current MMRD diagnostics rely on deep sequencing of tumor tissue samples, which can be expensive and overly invasive for early or routine screening. Less expensive sequencing methods such as ultra-low-pass (ULP) sequencing exist, but thus far have not been utilized for detection of microsatellite instability. In this thesis, we focus on 0.1× ULP sequences, in which about 10% of the genomic locations have one read in expectation. Having so few reads makes it difficult to differentiate experimental noise from true mutations. Cell-free DNA (cfDNA) consists of DNA fragments from cells all over the body, which circulate in the blood. Collecting and sequencing cfDNA is much less invasive than collecting tissue samples, but presents another challenge in that the fraction of DNA fragments from any particular cell (or group of cells) is low. Thus, if cancerous cells exist within the body, their representation in a given cfDNA sample is likely low. Together, these challenges present an obvious trade-off between signal strength and cost/invasiveness for screening and detection of MSI.&#13;
&#13;
This thesis focuses on the implementation, validation, and additional research of a computational tool to detect microsatellite instability in ultra-low pass cell-free DNA samples.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157879</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Co-optation of B Cell Developmental States in Malignancy and Autoimmunity</title>
<link>https://hdl.handle.net/1721.1/157878</link>
<description>Co-optation of B Cell Developmental States in Malignancy and Autoimmunity
Ramseier, Michelle L.
Transcriptional states provide a useful lens for understanding the diversity of cell identity and function. Cell state is regulated through both cell-intrinsic and -extrinsic mechanisms, and can diversify through tightly regulated transitions. However, perturbations to these regulatory mechanisms facilitate shifts in cell phenotypes and function that, left unchecked, can disrupt homeostasis and drive disease. The ability of dysregulated transcriptional states to integratively represent underlying intrinsic and extrinsic drivers of aberrant cell survival and function nominates their potential as prognostic and therapeutic targets in disease.&#13;
&#13;
Here, we establish the therapeutic significance of cell state through the lens of pathologies that dysregulate B cell development and maturation. B cells develop and mature through tightly regulated cell-intrinsic and -extrinsic transcriptional state transitions restricted by stage-specific survival, proliferative, and apoptotic dependencies. We thus utilize B cell development and maturation as model systems to study how perturbations to cell-intrinsic and -extrinsic regulators can result in the pathologic emergence of aberrant developmental states enabling dysregulated survival and proliferation. In each chapter, we apply single-cell RNA-sequencing to define heterogeneous B cell developmental states in malignancy and autoimmunity, and uncover underlying signaling perturbations linked to their dysfunctional transcriptional regulation. We consider how these aberrant developmental states are driven by mutational perturbations to dysregulated cell-intrinsic signaling in BCR-ABL1 B cell acute lymphoblastic leukemia (B-ALL), cell-extrinsic signaling in CTLA4-deficient T cell-mediated follicular B cell blocks, and integrative mutational and niche-specific survival in mantle cell lymphoma (MCL). Finally, we identify how these aberrant developmental states shift or resolve upon targeted therapeutic intervention in each disease context. Collectively, this work demonstrates how cell states are intrinsically and extrinsically regulated, how they inform aberrant survival in disease, and how they show promise as therapeutic targets.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157878</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Germanium on Silicon Integrated Photonics for the Mid-Wave Infrared</title>
<link>https://hdl.handle.net/1721.1/157877</link>
<description>Germanium on Silicon Integrated Photonics for the Mid-Wave Infrared
Morgan, Rachel E.
This thesis presents the development of a Germanium-on-Silicon (GOS) integrated photonics platform for the mid-wave infrared (MWIR) wavelength range. Integrated photonics applies nanofabrication approaches with optical materials in order to miniaturize and improve the robustness of optical systems. Most integrated photonics development occurs in the near-infrared for telecommunications applications, but there is increasing interest in expanding the technology to other wavelength ranges. The MWIR wavelength range has applications in environmental sensing, industrial monitoring, and communications. This work develops low-loss waveguides, a passive component library, integrated modulators, laser integration design, and systems analysis for the 2-5 µm wavelength range.&#13;
&#13;
Low-loss waveguides are demonstrated with losses of 0.6-2.5 dB/cm without top cladding. A detailed study of top cladding materials is conducted, including niobium pentoxide, hafnium dioxide, epitaxial silicon, and others. Of these materials, niobium pentoxide offers the best performance, with measured losses as low as 3.49±0.3 dB/cm.&#13;
&#13;
A passive component library is designed based on waveguides with and without top cladding, developing building-block components such as couplers, splitters, ring resonator filters, and Mach-Zehnder interferometers. Air-clad ring resonators demonstrate narrow-bandwidth filtering with a recorded extinction ratio of &gt;20 dB, full-width half max (FWHM) of 0.7 GHz, and unloaded Q factor of &gt;190,000.&#13;
&#13;
Integrated phase shifters are designed based on the plasma-dispersion effect and the thermo-optic effect. A plasma dispersion effect modulator is designed for forward-bias operation at 4.6 µm wavelength with predicted half-wave voltage V&#120587; of 0.5 V, length L of 525 µm, voltage-length product V&#120587;L of 0.027 V·cm, and speed of 58.4 MHz. A reverse-bias plasma dispersion effect modulator at 4.6 µm wavelength is designed with predicted V&#120587; of 4 V, L of 3 mm, V&#120587;L of 1.24 V·cm, and speed of 3.2 GHz. Thermal phase shifters fabricated out of 400 µm long metal wires have a predicted power required for a 2&#120587; phase shift, P₂&#120587;, of 410 mW without top cladding and 100 mW with Nb₂O₅ cladding.&#13;
&#13;
Designs for flip-chip integration of quantum-cascade lasers (QCLs) are presented. Coupling between the QCL and the GOS waveguides is simulated and input tapers are designed. Test structures for QCL-waveguide coupling and external cavity QCL demonstration are designed. The predicted coupling from the QCL into a GOS air-clad waveguide is 10-30%.&#13;
&#13;
The components designed in this work can be combined to develop photonic integrated circuit (PIC) designs for applications in the MWIR wavelength range. To demonstrate this, design analyses for a gas-sensing lidar transmitter and an MWIR spectrometer are presented. The gas-sensing lidar is predicted to provide a sensitivity to N₂O of &lt;1% with a much smaller mass compared to existing free-space optics satellite lidar transmitter designs. The MWIR spectrometer has a predicted spectral resolution of 0.2 nm.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157877</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Acoustically Controlled Remotely Operated Undersea Vehicles: A Quantitative Analysis</title>
<link>https://hdl.handle.net/1721.1/157876</link>
<description>Acoustically Controlled Remotely Operated Undersea Vehicles: A Quantitative Analysis
Stites, Corwin Wesley
This thesis topic stems from a U.S. Navy effort to alter an existing remotely operated vehicle (ROV) system. A vehicle reliant on a tethered connection to an operator requires adaptation to an untethered, acoustically controlled vehicle. This project provides a simulation-based tradespace exploration of the factors that limit untethered ROV performance. Factors that promote the use of an untethered system over a tethered system are also explored. A MATLAB simulation has been constructed to analyze a hypothetical ROV grid search mission across multiple parameters relating to the vehicle specifications, the mission layout, the acoustic communication system, and the operating environment. This simulation can then be used to generate a wide range of data regarding ROV performance by use of the Monte Carlo method. The performance metrics output by the simulation, along with an automated analytical tool created to process simulation data, provide quantitative insight into the viability of an ROV utilizing an acoustic communication system across a variety of scenarios.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157876</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Site-specific chemical and topological modifications to augment mRNA therapeutic potential</title>
<link>https://hdl.handle.net/1721.1/157875</link>
<description>Site-specific chemical and topological modifications to augment mRNA therapeutic potential
Aditham, Abhishek J.
Synthetic mRNA has emerged as a promising therapeutic platform for the treatment of a wide variety of diseases. Despite clinical demonstrations of mRNA for SARS-CoV-2 vaccines, mRNAs remain limited in application by their susceptibility to nucleases and overall short expression lifetime in vivo. We investigated the site-specific installation of chemical and topological modifications into therapeutic mRNA to augment their expression in cell cultures and mouse models. We began by developing messenger oligonucleotide-conjugated RNAs (mocRNAs), which are mRNAs ligated to modified oligonucleotides that contain 3’ nuclease-resistant modifications. We show that mocRNAs are subject to slower deadenylation and enhance therapeutic protein expression in cell lines and primary cell cultures. We expanded on this technology by creating mRNAs with chemically branched poly(A) tails, or multitail mRNAs, which increase the density of modifications at the 3’ end of mocRNA and further stabilize mRNA against deadenylation.&#13;
&#13;
In conjunction with increased nuclease resistance at the 3’ terminus, we developed a strategy to enhance translation initiation on circular mRNAs (circRNAs). We developed QRNAs, which are circRNAs that possess an unnaturally-linked inverted 7-methylguanosine (m7G) cap. QRNAs substantially outperform conventional circRNAs, given the low translation initiation efficiency of IRES compared to cap-dependent initiation. Ultimately, our studies exploring the chemical and topological space of mRNA demonstrate the value of site-specific chemical and topological modifications for designing future generations of designer mRNA-based therapeutics.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157875</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision measurement of the W boson mass with the CMS Experiment in pp collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/157874</link>
<description>Precision measurement of the W boson mass with the CMS Experiment in pp collisions at √s = 13 TeV
Yang, Tianyu Justin
The mass of the W boson, m_W, is an important fundamental constant of nature, which is also potentially sensitive to a plethora of physics beyond the Standard Model. In this thesis, we discuss the precision measurement of m_W with the CMS detector at the LHC in proton-proton collisions at √s = 13 TeV. The phenomenology of W bosons produced in pp collisions, the CMS detector characteristics, and other relevant factors are examined to justify the overall strategy to measure m_W from the muon transverse momentum and pseudorapidity spectrum [formula] in the W → µν channel with a part of the 2016 data corresponding to an integrated luminosity of 16.8 fb⁻¹. Dedicated studies aiming to reduce systematic uncertainties related to the muon transverse momentum calibration, the muon reconstruction and background rejection efficiencies, and the modeling of the W boson production and decay kinematics are presented. A profiled maximum-likelihood fit of MC templates to observed data incorporating over 4,000 nuisance parameters is employed to extract the central value and the total uncertainty on m_W. The result of this measurement is m_W = 80360.2 ± 2.4 (stat.) ± 9.6 (syst.) MeV = 80360.2 ± 9.9 MeV, which is consistent with the Standard Model prediction m_W^SM = 80354.5 ± 5.7 MeV.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157874</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Just Doing My Job: Normative Dimensions of Social Roles</title>
<link>https://hdl.handle.net/1721.1/157873</link>
<description>Just Doing My Job: Normative Dimensions of Social Roles
Wells, Eliza
“What should I do?” Often, our answers make reference to our social roles: we ask what we should do as lawyers, citizens, or parents. But this confronts us with problems. Consider a would-be whistleblower, a wife challenging the gendered division of household labor, or the conflicted police officer Javert from Les Misérables. These agents feel there is a genuine conflict between morality and the norms of their role. While many philosophers treat social roles as incidental to our moral lives, this dissertation aims to do justice to this experience of roles’ normative force. I argue that doing so prompts revision to orthodox views of role-occupants’ reasons for action, blameworthiness, and responsibility for structural injustice. In Chapter One, I develop a new account of how social roles generate normative reasons for occupants to comply with role norms. I argue that agents’ reasons to comply with their role norms depend on how those norms contribute to functioning social practices. In addition to its claims about the structure of normative reasons, my view delivers a striking upshot in cases of conflict. While popular accounts of role normativity often maintain that moral considerations can cancel roles’ normative force, my project suggests a radically different conclusion: role-occupants have good reasons to comply even with norms that result in conflicts with what they morally ought to do. Social roles generate genuine normative conflicts. While many role-occupants find conflicts between roles and morality distressing, many others seem not to notice that there is a conflict at all. Consider the oft-maligned excuse: “I was just doing my job.” Chapter Two defends an epistemic variant of this excuse. I argue that agents who comply with roles’ deliberative norms may—for good reason—bracket morally relevant considerations. As a result, they may be non-culpably ignorant of wrongdoing. On some views, this can excuse them from blame. 
But even denying that moral ignorance exculpates is compatible with accepting role-occupants’ excuses. Such views often emphasize being motivated by the right reasons. But because role compliance is often justifiable, ignorance need not be blameworthy indifference to the right reasons. The upshot is a novel position in the debate about moral ignorance as an excuse. We might worry that this unduly lets role-occupants off the hook. If, as I argue, role-occupants can have good reasons for acting immorally, and they can sometimes be blameless even when they do act wrongly, does that prevent us from taking those wrongs seriously? I grapple with this problem in Chapter Three. Drawing on theories of structural injustice, I argue that roles’ normative character actually generates responsibilities for justice. Because role performance both affirms and instantiates unjust structures, role-occupants bear responsibilities that can only be discharged by changing what their actions mean and do. This vindicates the widespread but philosophically puzzling view that agents ought to direct efforts towards injustices they participate in intimately, even when they could make a greater impact elsewhere. It also means that role-occupants are not off the moral hook. Ultimately, we each bear responsibility to create a more just social world.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157873</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Amplifying signals in the tumor microenvironment&#13;
for drug development and diagnostics</title>
<link>https://hdl.handle.net/1721.1/157872</link>
<description>Amplifying signals in the tumor microenvironment&#13;
for drug development and diagnostics
Martin Alonso, Maria Carmen
The advent of molecular biology and next-generation sequencing has significantly transformed our understanding of cancer and the delivery of cancer care. These advancements greatly accelerated the pace of biological discovery, leading to the development of targeted therapies and immunotherapies, which have resulted in unprecedented survival benefits for patients. They have also reshaped how disease is diagnosed and monitored, which increasingly involves genomic profiling of circulating tumor DNA (ctDNA) molecules in liquid biopsies such as blood. Despite the promise offered by these innovative therapies and monitoring tools, their broad integration into clinical practice faces important challenges. Widespread adoption of emerging therapies demands enhanced tools to successfully identify therapeutic targets and to restrict their potent activity to cancer cells. At the same time, improving the sensitivity of ctDNA-based tests, which remains limited by the scarcity of ctDNA in blood, holds the key to unlocking the full potential of liquid biopsy across many important clinical applications.&#13;
&#13;
This thesis addresses these critical challenges by combining the unique opportunities posed by secreted molecules in the tumor microenvironment (TME) with engineering principles of signal amplification. Tumors are intricately tied to their local and systemic microenvironments for progression, and key to orchestrating these interactions are secreted molecules. Targeting these abundant molecules, which are accessible extracellularly and oftentimes even systemically, offers significant advantages over traditional approaches that target confined, scarce, and occult malignant cells within tissues.&#13;
&#13;
In Part I of this thesis, we propose exploiting the catalytic activity of tumor-associated proteases in the local TME to selectively deliver potent therapies to cancer cells while sparing healthy tissues. Leveraging advancements in high throughput screening and in deep learning, we contribute important tools for the effective design of conditional drugs that require the cleavage of a protease substrate to unleash drug cytotoxicity. In Part II, we address challenges in ctDNA detection by introducing liquid biopsy priming agents - DNA-binding proteins and nanoparticles - that transiently attenuate endogenous ctDNA clearance routes. Priming agents synthetically amplify ctDNA levels in blood to greatly improve the sensitivity and the robustness of liquid biopsies. Our approach marks a paradigm shift in how we think about the limit of detection of molecular diagnostics and holds promise for other circulating biomarkers and beyond oncology. &#13;
&#13;
Collectively, this thesis presents a TME-centric perspective of cancer, coupled with engineering principles of signal amplification, to reframe therapeutic and diagnostic paradigms in oncology, with far-reaching implications across all stages of cancer management.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157872</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revealing SEI Formation and Evolution at the Li Anode/Liquid Electrolyte Interface in Li-ion Batteries by in situ Fourier Transform Infrared Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/157871</link>
<description>Revealing SEI Formation and Evolution at the Li Anode/Liquid Electrolyte Interface in Li-ion Batteries by in situ Fourier Transform Infrared Spectroscopy
Wang, Daniel
A novel in-situ FTIR method is developed to probe the Li anode/liquid electrolyte interface. Three different conventional electrolyte systems were tested: 1.2 M LiPF₆ in EC, 1.0 M LiPF₆ in EMC, and LP57 (1.0 M LiPF₆ in EC:EMC (3/7 vol %)). Using the spectroelectrochemical cell, FTIR measurements for the first plating step and for cycled cells (up to 50 cycles) were collected to look for new species formation. In the case of 1.2 M LiPF₆ in EC, LEMC formation was observed when the potential was brought below 1.50 VLi. LEMC growth accelerated when the potential was reduced below 0.0 VLi, upon contact with freshly plated Li metal. When 1.0 M LiPF₆ in EMC was used for the same study, either lithium methyl carbonate or lithium ethyl carbonate was formed. Upon switching to LP57, Li₂CO₃ became the dominant SEI component. When the three electrolytes were cycled in the spectroelectrochemical cell, the SEI peaks continued to grow for the first 10 cycles. After the first 10 cycles, LEMC and Li₂CO₃ growth plateaued, indicating SEI stabilization. On the other hand, the LRC signal diminished, indicating an unstable SEI formed by EMC. Additionally, anion decomposition was observed to be more pronounced under high concentrations of EC. Since anion decomposition can be used as a proxy for LiF formation, high-concentration electrolytes may perform better due to larger amounts of LiF formation.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157871</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural case on adjuncts</title>
<link>https://hdl.handle.net/1721.1/157869</link>
<description>Structural case on adjuncts
Jou, Eunsun
This dissertation investigates how case is assigned to nominal adverbials dubbed durative and multiplicative in Korean. These adverbials express the duration of an event, or the number of times an event is repeated. In transitive, unergative, and unaccusative constructions, the adverbial is marked with accusative case. In psychological predicate constructions, the adverbial is marked with nominative case. Interestingly, in passive and inchoative constructions (grouped together under the term nonactive), the adverbial allows both nominative and accusative case.&#13;
I derive these patterns from a specific model of Voice, and a model of successive-cyclic Dependent Case. I first argue in favor of a Voice system that treats passive and inchoative constructions as syntactically equivalent: whether a nonactive construction is passive or inchoative is determined by the feature specification on Voice (Kallulli 2007). Furthermore, this nonactive Voice head introduces an implicit agent (for passives) or causer (for inchoatives), which can be optionally realized as a PP. This agent/causer at Spec, VoiceP competes with the theme argument to move to Spec, TP. Hence, there are two different structures that can arise in nonactive constructions. The other constructions that do not show case optionality lack this competition. In transitive, unergative, and unaccusative constructions, there is no implicit agent/causer to compete with the theme argument. In psychological predicate constructions, the experiencer argument introduced at Spec, ApplP acts as an intervener and blocks the theme argument from competing with the implicit agent/causer.&#13;
&#13;
My model of successive-cyclic Dependent Case explains how the different structures result in different case patterns. It is a revised version of Levin’s (2017) original model, whereby case evaluation occurs not only at the end of the syntactic derivation but at the Spell-out of each phase. However, my version of the model involves a more relaxed locality constraint for dependent case assignment. I demonstrate how my model can not only derive the case marking patterns of durative and multiplicative adverbials, but can also account for other case phenomena in Korean such as case stacking and multiple nominative constructions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157869</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing the Intersectional Risks Associated with the Full Life Cycle of the U.S. Housing Stock</title>
<link>https://hdl.handle.net/1721.1/157868</link>
<description>Assessing the Intersectional Risks Associated with the Full Life Cycle of the U.S. Housing Stock
Manav, Ipek Bensu
This work presents the most comprehensive framework to date to assess the intersectional risks associated with design and policy decisions regarding the built environment. This framework is applied to decisions regarding the selection of hazard mitigation measures to apply, households to prioritize in hazard mitigation grant programs, and construction materials to use in efforts to reduce societal greenhouse gas (GHG) emissions.&#13;
&#13;
To study these decisions, a computationally inexpensive method is developed to compute expected damages associated with each individual building in a community with hurricane wind exposure. This method is applied to study the cost burden of expected damages on each individual household. Later, this is integrated into building life cycle assessment (LCA) to incorporate hazard vulnerability into building embodied emissions. Lastly, building LCA is extended to inform the sectoral environmental footprint (SEF) of construction material sectors.&#13;
&#13;
Together, the model results of this work show that expected damages are currently underestimated, socially vulnerable groups are likelier to be priced out of hazard repairs, and ignoring use and end-of-life stages leads to ignoring the largest portion of building life cycle emissions as well as the largest contributors to the SEF of construction materials. By reevaluating the performance of the housing stock under each metric, strategies are proposed to prevent monetary damages, redistribute the cost burden of remaining monetary damages, and couple considerations for climate mitigation and climate adaptation by promoting disaster risk reduction as a pathway towards GHG abatement.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157868</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Habit Formation and Political Persuasion: A Behavioral and Statistical Approach</title>
<link>https://hdl.handle.net/1721.1/157867</link>
<description>Habit Formation and Political Persuasion: A Behavioral and Statistical Approach
Tohidi Kalorazi, Amir
This thesis explores the complex dynamics of human behavior across diverse contexts, integrating perspectives from behavioral science and statistical analysis. The central focus of this study revolves around the analysis of repetitive behavior in various scenarios including shopping, social media use, and news sharing.&#13;
&#13;
The initial study investigates the influence of habits on the in-store shopping experience. By leveraging store closures as a disruptive event, we examine how these closures prompt individuals to alter their purchasing patterns. We propose that such disruptions encourage people to engage in more deliberate decision-making processes, leading them to explore alternatives that they might have previously overlooked due to established habits. Employing a difference-in-differences framework, we estimate the causal impact of habits on brand loyalty. Our findings reveal a significant role of habits, with households exhibiting stronger habits experiencing a temporary disruption in their shopping routines following store closures. Over time, these households appear to develop new habits in different stores, resulting in lasting changes in preferred brands. This suggests that the formation of shopping habits can lead to suboptimal consumer behavior. These insights have practical implications for businesses, including pricing strategies, advertising approaches, and product placement within stores.&#13;
&#13;
The second study introduces an innovative methodology for quantifying habitual behavior in the context of social media usage. Interactions with social media platforms often yield psychological rewards, fostering the development of habitual behaviors driven by cue-response associations. By leveraging entropy as an implicit measure of behavioral regularity, this study aims to uncover the intricate relationship between habit formation and digital routines. Through empirical analyses, we establish the validity of the entropy metric, demonstrating its effectiveness in capturing distinct behavioral patterns beyond mere frequency. Our results highlight the nuanced connection between entropy and future app engagement, indicating a positive association for lower entropy values and a significant decline for excessively irregular patterns. These findings contribute to theoretical understanding of habitual behavior and offer practical insights for managing digital habits. Ultimately, this work advances our comprehension of how habits manifest in the digital realm and provides a robust tool for predicting long-term user behavior.&#13;
&#13;
The third study delves into the intricate interplay between individuals' beliefs and their ability to anticipate the persuasive impact of climate change news articles. The central aim is to determine whether climate change deniers or believers possess varying capacities to predict the persuasive consequences of articles emphasizing climate change severity. Through a series of surveys, we gather predictions about the impact of such articles on climate change deniers. Surprisingly, findings reveal discordant predictions: deniers anticipate a backfire effect among peers, while climate believers foresee negligible effects. We rigorously test these predictions with a randomized survey experiment involving deniers, uncovering an unexpected positive opinion shift towards climate change after article exposure. Notably, this effect does not translate into discernible changes in stated or revealed support for climate change actions. In the context of the pressing climate challenge, our study offers insights to inform targeted communication and interventions that foster consensus and meaningful action.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157867</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Ethics within Metaphysics</title>
<link>https://hdl.handle.net/1721.1/157866</link>
<description>The Ethics within Metaphysics
Impagnatiello, Michele Odissea
This dissertation consists of three chapters at the intersection of ethics and metaphysics. In the first chapter, I put forward a new theory of personal identity, give arguments for it, and defend it from objections. In the first part, I argue that the two most prominent theories of personal identity, the psychological theory and the physical theory, do not satisfy some constraints on any acceptable theory: that personal identity be all-or-nothing, determinate, principled, and substantive. I then put forward a new theory, the phenomenal theory, on which personal identity is determined by the uninterrupted continuity of a stream of consciousness. I argue that this theory does satisfy all the desiderata, and is as such a better theory. In the second part, I argue that the phenomenal theory also solves the problem of fission cases, because there are no cases of phenomenal fission. In the third and last part, I consider the objection that, on the phenomenal theory, we do not survive interruptions of consciousness such as sleep; I argue that this objection doesn’t succeed in refuting the theory. In the second chapter, I generalize a debate about laws of nature to the domains of metaphysics and ethics. Patterns in the natural world lead us to the postulation of laws. A metaphysical dispute arises as to whether these laws are mere summaries of the mosaic (as the Humean would have it), or whether they govern the mosaic (as the Anti-Humean would have it). In this paper, I first argue that similarly, patterns in the metaphysical and ethical facts should lead us to the postulation of metaphysical and ethical laws, which are the proper subject of metaphysical and ethical inquiry. Then, I argue that the Humean/Anti-Humean debate also arises when it comes to metaphysical and ethical laws. 
Finally, I argue in favor of the Anti-Humean conception of metaphysical and ethical laws, both adapting standard arguments used in the debates about laws of nature, and with new arguments specific to metaphysics and ethics. In the third chapter, I investigate conflicts between ethics and metaphysics. Sometimes, a metaphysical theory has revisionary ethical consequences: for example, some have thought that modal realism entails that there are no moral obligations. In these cases, one may be tempted to reject the metaphysical theory on the grounds that it conflicts with commonsensical ethics. This is an ethics-to-metaphysics inference. My claim is that this inference is in general irrational, and that the fact that a metaphysical theory has highly revisionary ethical consequences is no reason at all to reject the theory. I argue for this claim on the basis of general epistemic principles about the transmission of justification, and what makes for a good argument. Furthermore, I argue that my account can explain why a certain narrow class of ethics-to-metaphysics inferences are rational.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157866</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multidimensional profiling of the Toxoplasma gondii proteome</title>
<link>https://hdl.handle.net/1721.1/157865</link>
<description>Multidimensional profiling of the Toxoplasma gondii proteome
Herneisen, Alice Lydia
Universally, external signals are transduced and propagated in cells by secondary messengers. In the asexual and replicating stages of apicomplexan parasites, these pathways initiate and sustain transitions within the lytic cycle responsible for parasite spread and pathogenesis. Among these early-branching parasitic protists are the etiologic agents of the widespread, persistent, and deadly human diseases malaria (Plasmodium spp.) and toxoplasmosis (Toxoplasma gondii), making the understanding of these parasite signaling pathways of global importance. Although components of secondary messenger signaling pathways are conserved among apicomplexans and higher eukaryotes, 800 million years of divergence from existing model organisms precludes identification of parasite-specific secondary messenger responses or a priori reconstruction of their signaling pathways.&#13;
&#13;
This thesis addresses that gap. I have adapted state-of-the-art proteomics methods to study the proteome of the model apicomplexan T. gondii across multiple dimensions: abundance, stability, time, and space. Chapter 2 describes how I employed thermal proteome profiling to identify the target of an antiparasitic compound, thereby enhancing our understanding of parasite calcium signaling pathways. In a conceptual leap, I applied this method to systematically identify calcium-responsive proteins on the basis of biochemical interactions with this second messenger in Chapter 3. From this analysis, the protein phosphatase PP1 emerged as an unanticipated calcium-responsive phosphatase along with dozens of novel proteins belonging to this critical signaling network.&#13;
&#13;
Signaling pathways communicate to orchestrate complex cellular processes, yet in apicomplexan parasites they are often studied in isolation. In Chapter 4, I identify a node linking three key second messenger pathways in T. gondii: calcium, cyclic GMP, and cyclic AMP. The apicomplexan-specific kinase SPARK regulates the AGC kinases PKG, PKA C1, and PKA C3, which together control transitions within the asexual cycle of this important family of parasites.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157865</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Machine Learning to Discover Biochemical Determinants of Physical Fitness</title>
<link>https://hdl.handle.net/1721.1/157864</link>
<description>Causal Machine Learning to Discover Biochemical Determinants of Physical Fitness
Nawaz, Hesham
Identifying the key pathways relevant to cardiorespiratory fitness is of great importance for both predicting exercise responsiveness and potentially finding which interventions are likely to affect it. While contemporary deep learning models have demonstrated great success in pattern recognition and generation for various data modalities, their ability to decipher the causal mechanisms underlying these patterns is limited. This work proposes and evaluates a methodology using state-of-the-art causal discovery and causal inference methods to uncover the relationships between different proteins and their impact on changes in individuals’ maximal oxygen consumption (a proxy for physical fitness).
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157864</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phonetic Faithfulness in Phonological Opacity</title>
<link>https://hdl.handle.net/1721.1/157863</link>
<description>Phonetic Faithfulness in Phonological Opacity
Kim, Yeong-Joon
This dissertation presents a novel approach to phonological opacity, which is grounded in new findings regarding substantive restrictions on the patterns of opaque interactions. The central thesis posits that phonological opacity functions to preserve the phonetic properties specified in the input of a phonological operation. Specifically, it argues that inputs are enriched with phonetic auditory features, and surface opacity emerges as a result of processing these enriched inputs. This proposal can be detailed as follows. First, processes that become opaque are initially biased by certain phonetic markedness conditions. Second, these phonetic biases, encoded in the phonetically enriched inputs, are mapped onto the nearest phonologically contrastive sounds to satisfy the requirement of phonetic faithfulness, resulting in surface phonological opacity.&#13;
 &#13;
This hypothesis yields a testable prediction: only phonetically natural processes, which possess an appropriate phonetic markedness condition, can become opaque. The results of typological surveys encompassing 87 counterfeeding and 65 counterbleeding interactions across languages support this prediction, revealing that opacified processes are subject to a narrow range of markedness conditions, such as coarticulatory assimilation (e.g., palatalization) and durational adjustments (e.g., segmental weakening). Other types of phonological processes, particularly non-natural ones, are only rarely, if ever, opacified. This asymmetry in the patterns of phonological opacity underscores that opaque interactions are not independent of phonetic substance. &#13;
&#13;
In addition to this main finding, it is also shown that the current proposal offers additional advantages in explaining phonological opacity. First, it successfully accounts for various non-typical opaque interactions such as feeding opacity and stress misapplications, alongside counterfeeding and counterbleeding interactions. The proposal also integrates various phonological phenomena, such as compensatory lengthening, coalescence, and incomplete neutralization, within the framework. Second, learning simulations using a weighted constraint version of the proposed model demonstrate that intermediate hidden structures, such as phonetically enriched inputs, can be learned when the mappings between abstract inputs and surface representations are established.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157863</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tracing the Precursors and Amplifiers of Conflict in the Information Age: An NLP Inquiry of Tensions, Political Communication, and Misinformation</title>
<link>https://hdl.handle.net/1721.1/157862</link>
<description>Tracing the Precursors and Amplifiers of Conflict in the Information Age: An NLP Inquiry of Tensions, Political Communication, and Misinformation
Zimmer, Philipp
Violent conflicts, in their varied and complex forms, have long been a subject of research and political discourse. Despite increased attention to the field, various nuances and dynamics are yet to be explored. This thesis seeks to study three aspects of the multifaceted nature of conflicts through the lens of natural language processing (NLP), thereby not only offering new insights but also advancing the field's methodological landscape.&#13;
&#13;
First, the study delves into the identification of causal predictors of conflicts. By showcasing the potential of a frame-semantic parser, I am able to quantify the precursors that contribute to conflict and examine the potential for enhancing prediction models with greater qualitative depth. This chapter utilizes a rich but under-examined data source, news articles, which can aid in closing the data gap in conflict studies.&#13;
&#13;
In the second chapter, the communication strategies of political leaders during crises are scrutinized to understand the rationale behind their messaging and the impact thereof. I argue that the frequency and style of leaders' engagement with their citizens depend on the characteristics of the political system, and that this engagement matters for societal conceptions.&#13;
&#13;
The final chapter addresses the spread of misinformation, such as in times of crisis, investigating which themes are prone to widespread propagation on social media and presenting a novel ensemble method for the detection of misleading and false content.&#13;
&#13;
By integrating computational techniques with political theory, this work contributes to a nuanced understanding of conflict dynamics and offers rich potential for anticipatory actions of policymakers.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157862</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coevolving Cybersecurity Adversaries for Industrial Control Systems in Failure-Prone Environments</title>
<link>https://hdl.handle.net/1721.1/157861</link>
<description>Coevolving Cybersecurity Adversaries for Industrial Control Systems in Failure-Prone Environments
Wicks, Kathryn
As industrial control systems become universally integrated with software and connected to the internet, they have become targets for cyberattacks and sabotage. Detecting cyberattacks on these networks is difficult because existing datasets on attacks are minimal and the bulk of intrusion detection systems are designed for enterprise environments rather than industrial environments. In industrial environments, mechanical failures, stress states, and electrical problems are expected, with repairs included in daily operations. In enterprise environments, such failures are rarer and more high-impact as a result. We investigate the extent to which this mismatch in the impact of physical stressors and failures degrades the ability of traditional intrusion detection algorithms to perform in the industrial environment. In the sub-area that this thesis focuses on, power microgrids, such disturbances can come in the form of line-line faults, line-ground faults, lack of generation capacity to meet demand, and unintentional islanding, among many others. Microgrids must be resilient to these events, and this thesis investigates to what extent they currently are and whether they can be improved. Specifically, this thesis asks: do traditional IDSs cause false alarms when placed in a failure-prone environment? How do these intrusion detectors perform overall? Can they be improved with additional training? And finally, can intrusion detection systems be tricked by attacks which appear to be "benign" failure modes? This thesis answers these questions by comparing the performance of different anomaly detection methods on cyberattack datasets with varying levels of stressor complexity and severity, and finds that stress on an industrial system can degrade anomaly-based intrusion detector performance. Expanding on this idea, an attacker is then trained to adversarially mask a dataset, and a detector is co-evolved alongside it to detect the attacks.
Finally, the coevolution is brought into the hardware-in-the-loop simulation environment, where attackers and defenders act in real time to change the state of a realistic microgrid simulation. From these experiments, it is found that attackers can leverage grid disturbances to hide their actions, and that accurate real-time simulations are highly useful for identifying vulnerabilities in a cyber-physical system.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157861</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technological Innovation and Integration of Whole Brain Imaging, Olfactory Stimulation, and Correlative Microscopy in Larval Zebrafish</title>
<link>https://hdl.handle.net/1721.1/157835</link>
<description>Technological Innovation and Integration of Whole Brain Imaging, Olfactory Stimulation, and Correlative Microscopy in Larval Zebrafish
Swain, Corban N.
Achieving a deep understanding of the brain is a cross-disciplinary endeavor that requires the investigator to consider biomolecular, electrical, and sensory interactions across time and space at many scales. This understanding is important because a deeper understanding of the brain precedes advancements in efficient computing, generalizable frameworks for learning, and, of critical importance, the understanding and treatment of neurological diseases. Towards this end, this thesis presents novel approaches and technologies for whole-brain imaging, olfactory stimulation, and correlative imaging---i.e. the utilization and registration of multiple imaging modalities within a single sample. The overall objective of this thesis research is not just to create technologies, but to integrate them to enable richer and more contextual understandings of the larval zebrafish's brain.&#13;
In this work we show novel light field microscopy algorithms that allow us to reconstruct 3D images from 2D micrographs with improved resolution to enable high-frame-rate recordings of whole-brain neural activity. We describe the design and construction of the first known system for multi-directional olfactory stimulation of larval zebrafish with up to ten separate odor channels. We demonstrate an optimized expansion microscopy-compatible immunostaining protocol for whole-mount zebrafish which preserves registration epitopes to move towards the neuron-level alignment of structural and functional data. And, finally, we showcase a set of proof-of-concept experiments and analyses which demonstrate our ability to integrate olfactory stimulation, whole-brain calcium imaging, behavioral recording, and structural staining in individual larvae.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157835</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Existence and Analysis of a Rotating Stall Inception Continuum&#13;
&amp; Development of Concept Questions in Fluid Dynamics</title>
<link>https://hdl.handle.net/1721.1/157834</link>
<description>Existence and Analysis of a Rotating Stall Inception Continuum&#13;
&amp; Development of Concept Questions in Fluid Dynamics
Cherry, Maranda F.
This thesis presents two projects, an analysis of rotating stall inception for axial compressors in turbomachinery, and a description of the creation of Concept Questions for a text on internal flows. The first part of this thesis identifies flow behavior that defines two routes to rotating stall, known as modal and spike type rotating stall inception. It continues previous studies by MIT and the University of Cambridge surrounding unification of these two stall types under a dynamical system framework. Calculations were carried out for an isolated rotor, with a high hub-to-tip radius ratio, using TBLOCK, a Reynolds-averaged Navier-Stokes solver. The results show (i) the dependence of stall inception on the compressor axisymmetric pressure rise characteristic and the characterization of modal and spike stall inception as two paths, located at the ends of a continuum of possible paths to stall, (ii) the effect of blade passage accelerations and asymmetry in the onset process, and (iii) the divergence of stall inception from two-dimensionality as a function of the slope of the total-to-static compressor pressure rise characteristic. The calculations show that compressor pressure rise characteristic slopes, dψ/dϕ, less than 0.3 have a stall cell growth rate, σ, that agrees with two-dimensional theory. The divergence of stall inception from two-dimensionality is suggested as a distinguishing feature of spike type stall inception compared to modal type stall inception. The second part of this thesis encompasses the creation, editing, and compilation of Concept Questions for seven book chapters in a new text that describes the use of Concept Questions in teaching (and learning) fluid mechanics. The composition and qualities of a good concept question are defined, and the process of generating and editing questions for the intended audience is discussed.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157834</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing resource allocation in large communications satellite constellations</title>
<link>https://hdl.handle.net/1721.1/157833</link>
<description>Optimizing resource allocation in large communications satellite constellations
Pachler de la Osa, Nils
Satellite communications are becoming a key technology for maintaining connectivity in a world driven by information. In recent years, established players (such as SES and Telesat), as well as new competitors (such as SpaceX and Amazon), have proposed constellations able to serve hundreds of thousands of users, using thousands of satellites. While the orbital configuration of each design is different, the next generation of satellite communications relies on highly flexible digital payloads, such as phased array antennas, on-board processing, and adaptive modulation and coding schemes. Several approaches have been proposed to deal with the complexity of the added flexibilities at the spacecraft level. Nevertheless, how to address the flexibilities at the constellation level, critical to operating the next generation of systems, remains an open question. This dissertation develops optimization-based decision-making frameworks for designing and operating the next generation of communication constellations. In particular, novel methods for the Beam Shaping, User Grouping, Satellite Routing, Frequency Assignment, and Gateway Routing problems are proposed, tailored for large non-geostationary orbit constellations with satellites at multiple altitudes, referred to as hybrid systems. The methods leverage optimization to find an optimized set of decisions that maximize capacity and quality of service and minimize necessary ground infrastructure, all while avoiding interference. The proposed methods are then combined, tested, and evaluated using existing constellation designs under representative operational conditions with hundreds of thousands of users. The reported results show that the proposed techniques can double the capacity of these systems, with favorable trade-offs in quality of service and necessary ground infrastructure.
By testing existing designs, it is concluded that the number of satellites and the link quality are the main drivers of performance. Furthermore, the analysis shows that hybrid constellations offer advantages over other designs, thanks to the combination of high-quality links on low-altitude satellites and high coverage on high-altitude satellites. Additionally, this dissertation studies the optimal proportion of satellites across various altitudes in hybrid LEO-MEO constellations. Results show that hybrid constellations are desirable when the costs of MEO and LEO satellites are comparable and interference is minimal.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157833</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tools for Mapping the Links Between Stimuli, Affective States, and Behavior through Whole-Brain Imaging in Zebrafish Larvae</title>
<link>https://hdl.handle.net/1721.1/157832</link>
<description>Tools for Mapping the Links Between Stimuli, Affective States, and Behavior through Whole-Brain Imaging in Zebrafish Larvae
Zhang, Caroline Lige
Affective states, often referred to as emotional states, exert substantial influence on behavior and decision-making processes. Traditionally, researchers have turned to functional imaging to delve into the neural mechanisms that drive both behavior and decision making. However, functional imaging of behaving animals often focuses on a singular brain region. Whole-brain imaging, on the other hand, has the capacity to significantly advance our understanding of the brain's functional architecture. In this pursuit, zebrafish larvae emerge as an ideal model for whole-brain imaging due to their transparency, small size, genetic manipulability, rapid development, and high reproducibility. Recent advances in protein engineering and fluorescence microscopy have empowered researchers to observe neural activity across extensive neuronal populations. Genetically Encoded Calcium Indicators (GECIs) and Genetically Encoded Voltage Indicators (GEVIs) provide the means to probe brain dynamics with single-cell precision. The advent of lightsheet microscopy technologies has further enriched our capabilities, enabling the recording of brain activity at remarkable frame rates, ranging from several hundred to several thousand frames per second, all while the animal is exposed to precise visual, auditory, and/or olfactory stimulation. Leveraging these experimental advancements in conjunction with machine learning and computer vision techniques, our study aims to forge connections between stimulation, neural activity, and behavior through a larval zebrafish model.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157832</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical Design of Molecular Nanostructures for Exciton Control</title>
<link>https://hdl.handle.net/1721.1/157831</link>
<description>Theoretical Design of Molecular Nanostructures for Exciton Control
Castellanos, Maria A.
Organic semiconductors comprised of strongly-coupled chromophores harness control of delocalized excitations, or excitons, via programmed molecular structures. The dynamics of these excitons enable energy and information transfer within molecular networks, positioning chromophore assemblies as ideal candidates for a number of technologies such as solar energy conversion, nanoelectronics, and quantum computing. Despite significant advancements, there exists no universal model that can explain the dependence of exciton photophysics on molecular morphology. This thesis employs mathematical and atomistic models to contribute key physical insights into the interdependencies between chromophore spatial organization and exciton dynamics, shaped by inter-chromophore couplings and interactions with the thermal bath.&#13;
&#13;
In the first part, a Frenkel Exciton-based model is introduced as a strategy for studying exciton evolution between precisely arranged chromophores. In Chapter 2, I develop a novel approach to map unitary quantum computing operations to Hamiltonians describing excitonic circuits in the presence of a model bath. Then, Chapter 3 scales this framework to complex quantum algorithms represented by explicit molecular systems. Finally, Chapter 4 presents an innovative molecular approach for directing exciton flow via geometrical phase in tightly-bound chromophore arrays. &#13;
&#13;
The second part delves into the intricacies of exciton interaction in densely packed molecular systems arranged within DNA scaffolds. Chapter 5 combines molecular dynamics and quantum mechanical calculations, further validated by experimental results, to study the interplay between long-range electrostatic and short-range charge transfer interactions. Chapter 6 then correlates this interplay with geometrical configurations derived from the DNA scaffolding. This thesis culminates in Chapter 7, which introduces a computational pipeline designed to leverage the precise control over excitons afforded by macromolecular frameworks, paving the way for custom-tailored DNA-based excitonic circuits.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157831</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>High field dynamic nuclear polarization methods: Microwave sources and mechanisms</title>
<link>https://hdl.handle.net/1721.1/157830</link>
<description>High field dynamic nuclear polarization methods: Microwave sources and mechanisms
Mardini, Michael
In thirty years of active development, dynamic nuclear polarization (DNP) has emerged as a forefront technique for expanding the scope of solid state nuclear magnetic resonance. For the most part, and particularly at high fields, these advances have come with continuous-wave microwave irradiation and the introduction of nitroxide-based biradicals exploiting the cross effect mechanism. In this thesis, I argue that this approach is not necessarily optimal and report progress towards arbitrary-waveform DNP through the construction of a suitable solid-state microwave source and the use of narrow-line monoradicals exploiting the Overhauser effect. My colleagues and I have also investigated the Overhauser mechanism through selective deuteration of radicals, leading to a relatively simple modification which yielded a significant increase in Overhauser enhancement. Finally, I detail studies of two unexplored DNP mechanisms in trityl: the three-spin solid effect and resonant mixing. With solid-state microwave sources and Overhauser radicals, DNP is now more accessible, as we can achieve reasonable enhancement without the need for a gyrotron. Moreover, as amplifier and resonator technologies continue to develop, it is likely that pulsed DNP will emerge at high fields and overtake continuous-wave DNP in absolute sensitivity enhancement as well.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157830</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for the Study of Galactofuranose in Mycobacteria</title>
<link>https://hdl.handle.net/1721.1/157829</link>
<description>Methods for the Study of Galactofuranose in Mycobacteria
Taylor, Katherine I.
Despite the energy costs associated with deploying furanose sugars over their more stable pyranose counterparts, galactofuranose is prevalent across nature from commensals to human pathogens. However, it is conspicuously absent from human cells, thus establishing its biosynthetic pathways as important potential drug targets. Our knowledge about the biological roles of galactofuranose in cells is hampered by a dearth of methods by which to study it. Access to glycans containing galactofuranose is limited by the unfavorable equilibrium between pyranose and furanose forms of the sugar, leading to low-yielding and synthetically arduous routes to galactofuranose glycans and its high-energy nucleotide sugar donor, used in biochemical experiments to probe the activity and kinetics of galactofuranose glycosyltransferases. Moreover, study of carbohydrate structures within the cell is limited by the lack of methods to selectively modify glycans with functional handles. Finally, study of the biosynthetic machinery for galactofuranose biosynthesis and inhibitors thereof is limited by their relatively weak affinities for their ligands, providing a challenge for selective chemical probes. In this work, we describe three methods to address these challenges and expedite the study of galactofuranose-containing glycans, their biological function, and their biosynthetic machinery. First, we developed a method to produce the rare high-energy sugar donor UDP-galactofuranose in situ for facile preparation of the mycobacterial galactan utilizing the sugar mutase UDP-galactopyranose mutase. We used this method to generate up to 10 milligrams of polymer and demonstrated that it could be selectively functionalized. 
Second, we leveraged the rapidly expanding set of biosynthetic probes of the mycobacterial cell wall to characterize intracellular distances between distinct layers of the cell wall in Mycobacterium tuberculosis model organisms Corynebacterium glutamicum and Mycobacterium smegmatis using fluorescence resonance energy transfer. We evaluated strains with varying galactan structures and compared our data to previous characterization of the cell wall to assess the method’s utility. Finally, we characterized the kinetics of a mild electrophile, the squaric ester, and assessed its utility in selectively binding and modifying a key galactofuranosyl transferase involved in mycobacterial cell wall biosynthesis. Taken together, these findings present a suite of methods to expedite the exploration of galactofuranose structure and function within a relevant pathogen and lay groundwork for further study of galactofuranose across other organisms.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157829</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of Tip Clearance and Surface Roughness on Small-Scale Turbopump Impeller Performance</title>
<link>https://hdl.handle.net/1721.1/157828</link>
<description>Effects of Tip Clearance and Surface Roughness on Small-Scale Turbopump Impeller Performance
Ruecker, Kinjal A. L.
Centimeter-scale turbopump impellers typically used in liquid rocket engines of small launch vehicles suffer from reduced performance due to manufacturing challenges and nonuniform geometric scaling. This thesis aims to characterize the impact of impeller blade tip clearance and surface roughness on the performance of small-scale turbopump impellers by assessing the dominant flow features, quantifying the underlying loss mechanisms, and determining the sensitivity of performance losses to changes in tip clearance and surface roughness. The study identifies the primary flow features governing impeller performance to be blade tip leakage flow and secondary flow. The analysis identifies two distinct flow regimes based on tip clearance: above a tip clearance of 5% of blade span, losses are predominantly due to blade tip leakage flow, whereas below this threshold, losses are governed by both secondary flow and blade tip leakage flow. For tip clearances above 5% of the blade span, blade tip leakage flow is estimated to contribute more than 80% of total impeller loss. A 1% change in tip clearance is estimated to result in a 0.8% loss in efficiency. The calculations suggest that increasing surface roughness reduces the effective tip clearance due to increased viscous effects in the tip gap, but strengthens the secondary flow. This lowers the effective tip clearance that separates the flow regimes. The contribution of blade tip leakage loss to total impeller loss decreases by up to 22% as surface roughness is increased from an Rₐ value of 1 µm to 10 µm. The strengthened secondary flow at higher surface roughness increases mixing of the blade tip leakage flow with the blade passage flow, leading to larger regions of blockage. Increasing the surface roughness from an Rₐ value of 1 µm to 10 µm results in a 4% loss in impeller efficiency.
This study demonstrates that surface roughness is more impactful on small-scale impeller performance than blade tip clearance, and so manufacturing for smooth surfaces should be prioritized over reducing the blade tip clearance gap.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157828</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Mechanical Counter Pressure Spacesuits and Compression Garments: Active Pressurization and Design for Mobility</title>
<link>https://hdl.handle.net/1721.1/157827</link>
<description>Engineering Mechanical Counter Pressure Spacesuits and Compression Garments: Active Pressurization and Design for Mobility
Kothakonda, Akshay
Extravehicular activities (EVAs) are an essential and integral part of human exploration of space, with their use ranging from performing scientific experiments on a planetary surface to assembling space stations. Spacesuits must provide an astronaut with the conditions necessary to survive an EVA for several hours and to enable them to carry out these complex tasks. One of the main factors impeding the effectiveness of EVA operations is the stiffness of the spacesuits, which is inherent in gas pressurized suits.  While there have been several engineering advances in improving the mobility of gas pressure suits, a mechanical counter pressure (MCP) suit seeks to significantly improve mobility and minimize metabolic workload by replacing gas pressure with contact pressure of a tensioned fabric against the body. Although a marked improvement in mobility of MCP suits over traditional gas pressurized suits was demonstrated in the 1970s with the Space Activity Suit, engineering challenges remain before such a suit can be used operationally. Applications of the MCP suit concept extend to compression garments for athletic and medical use.  This thesis seeks to address some of the fundamental requirements of an MCP suit. These include providing uniform MCP of 29.6 kPa over the body, minimizing mechanical work during suited movements, and enabling easy don and doff. While the thesis focuses on the single degree-of-freedom arm section of the suit, the work can be extrapolated to the entire body.  The bidirectional actuation of two-way Shape Memory Polymers (2W-SMP) is leveraged to both provide MCP and allow for easy don/doff. This is achieved by reversing actuation via thermal stimulus. Two designs of MCP suit as an assembly of suit fabric, 2W-SMP, and elastomers are conceived and analysis is carried out to select the more feasible design. On the selected design, analysis is conducted to select a 2W-SMP with maximum MCP for a given donning effort.
Two types of suit fabrics are analyzed in the design: the woven fabric and the jersey knit fabric. Nonlinear finite element analysis (FEA) models that can be used to analyze the deformation of these fabrics under suited movements have been developed. Results from these simulations are expected to aid in designing the fabric in such a way that it sustains circumferential tension of the 2W-SMP and minimizes mechanical work during movements. While the former would require the fabric to be stiff along the circumferential direction, the latter can be achieved by aligning the compliant axes of a given fabric with the directions of first principal strain. The process of estimating the optimum fabric pattern will be iterative, and the resulting pattern may comprise a composite of different fabric types with varying parameters. Mapping skin Lines of Non-Extension (LoNEs) informs contours for inextensible cables in such a way that they do not impede movements. These cables may form part of the suit's life support system.  This thesis focuses on developing tools such as a methodology for sizing SMPs, fabric models, and LoNEs, and as such does not use those tools to arrive at an optimum suit design. Utilization of these tools towards suit design is one of the future tasks in this work. Additionally, the author expects that future research efforts in this area at large will benefit from these tools.  The thesis includes an introduction of the problem and the motivation, background and literature review of relevant concepts, a deep dive into analysis and tests on shape memory polymer materials and their use for compression devices, development of fabric numerical models, and finally a discussion of the work and a summary of contributions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157827</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Authenticity in the Workplace:  What does it really mean?</title>
<link>https://hdl.handle.net/1721.1/157826</link>
<description>Authenticity in the Workplace:  What does it really mean?
Pervaaz, Viquar A.
Recently, the word authenticity has been used widely in organizations, specifically as an attribute needed in leaders.  During the pandemic, the use of the word became even more prominent and organizationally universal. While the term is great in concept, the power of the word “authenticity” remains nebulous.  This poses a potential problem for organizations and teams, as it presents the risk of not delivering on this commitment if the elements of authenticity are not defined and understood.  Making a promise of authenticity without delivering on it may have a negative impact on individual and organizational morale/culture and a longer-ranging impact in terms of employee engagement and retention.  Using the lens of cognitive dissonance theory as a construct to view authenticity as a “product” from a marketing perspective, one has a framework to postulate that if expectations are not clear and the perceived performance (delivery on the promise of specific elements of authenticity) is not optimal, then there will be ramifications in terms of satisfaction (e.g. employee engagement).  This paper will explore why defining this word in an organizational context is important, what the macro dimensions of authenticity are that help frame and define it, and what variables contribute to bringing authenticity to life.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157826</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Limits to extreme event forecasting in chaotic systems</title>
<link>https://hdl.handle.net/1721.1/157825</link>
<description>Limits to extreme event forecasting in chaotic systems
Yuan, Yuan
Predicting extreme events in chaotic systems, characterized by rare but intensely fluctuating properties, is of great importance due to their impact on the performance and reliability of a wide range of systems. Some examples include weather forecasting, traffic management, power grid operations, and financial market analysis, to name a few. Methods of increasing sophistication have been developed to forecast events in these systems. However, the boundaries that define the maximum accuracy of forecasting tools are still largely unexplored from a theoretical standpoint. Here, we address the question: What is the minimum possible error in the prediction of extreme events in complex, chaotic systems? We derive the minimum probability of error in extreme event forecasting along with its information-theoretic lower and upper bounds. These bounds are universal for a given problem, in that they hold regardless of the modeling approach for extreme event prediction: from traditional linear regressions to sophisticated neural network models. The limits in predictability are obtained from the cost-sensitive Fano’s and Hellman’s inequalities using the Rényi entropy. The results are also connected to Takens’ embedding theorem using the “information can’t hurt” inequality. Finally, the probability of error for a forecasting model is decomposed into three sources: uncertainty in the initial conditions, hidden variables, and suboptimal modeling assumptions. The latter allows us to assess whether prediction models are operating near their maximum theoretical performance or if further improvements are possible. The bounds are applied to the prediction of extreme events in the Rössler system and the Kolmogorov flow.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157825</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Urban Building Energy Modeling</title>
<link>https://hdl.handle.net/1721.1/157824</link>
<description>Accelerating Urban Building Energy Modeling
Le Hong, Zoe; Wolk, Samuel
Enabling data-driven decision-making in the built environment is critical to achieving ambitious and urgent decarbonization goals. In the building sector, urban building energy models (UBEMs) have become a valuable tool for jurisdictions to develop evidence-based retrofitting policies, but dynamically exploring solutions is hampered by the computational expense and organizational overhead of physics-based building energy models. To address these challenges, we present a fast, flexible, and comprehensive UBEM methodology which can be used to reduce identified barriers to time-sensitive decision-making in building stock decarbonization spheres. The methodology combines the speed of current data-driven approaches with the flexibility of computationally intensive, but accurate, engineering models. Identifying machine learning methods as a viable approach, we implement convolutional neural networks (CNNs) which embed timeseries from hourly weather data and building schedules; the embeddings are then combined with static building characteristics and projected to monthly heating and cooling loads. The proposed approach allows for programmatic flexibility and robustness to unique hourly weather conditions globally, while contextual abstraction enables geometric independence. A dataset of over 1 million detailed thermodynamics-based simulations was constructed to train and validate the surrogate model. Model results at the individual shoebox, building, and urban scales compare favorably to traditional numerical methods and meet accepted error bounds under national energy simulation standards.  Additional validation at the urban and national scales is performed using public building simulation datasets.  We then demonstrate expanded applications, which leverage the reduced computational cost of the framework to make traditionally infeasible analysis modes tractable and deployable.
The methodology presented is intended to be utilized for both very-large-scale systematic analysis and near-real-time interactive explorations. In developing this framework, we aim to provide new mechanisms for key stakeholders in the decarbonization effort to quickly generate actionable insights and engage in iterative discussions to develop evidence-based policy across global building stocks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157824</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relative Robot Localization and Frame Alignment for Multi-Robot Collaboration</title>
<link>https://hdl.handle.net/1721.1/157823</link>
<description>Relative Robot Localization and Frame Alignment for Multi-Robot Collaboration
Peterson, Mason B.
The growing field of collaborative robotics has the potential to enable and improve the execution of many challenging robot applications. For instance, with teamwork between multiple agents, dynamic object tracking can more completely cover an environment and trajectory planning becomes safer. However, for robots to share the quickly changing spatial information involved in these tasks, robots need to be able to express information originally sensed or planned in their own frame in the frame of neighboring agents. This can be challenging in cases where robots have no global pose information, resulting in a steady accumulation of error, or drift, in their local pose estimates. To mitigate the effects of drift, neighboring agents must make up-to-date estimates of the alignment between their frames, which can be difficult due to ambiguous alignments and the presence of outlier measurements. To address these issues, the first contribution of this thesis is a method for performing fast incremental frame alignment between pairs of robots, enabling collaborative multiple object tracking (MOT), the task of monitoring the locations of dynamic objects in an environment. To perform frame alignment, robots build up maps of recently seen static objects and use these maps and the detections of tracked dynamic objects to correct for frame drift. Using frame alignment estimates, agents share object detection information and account for additional uncertainty associated with the alignment estimate. The second contribution of this thesis presents a method to perform frame alignment with no initial guess. Many potential frame alignments are computed, and we develop a filter that uses temporal consistency to reject outlier alignments and only accept a series of alignments that are consistent over time. We demonstrate in hardware experiments our ability to perform frame alignment in difficult scenarios and improve the quality of collaborative object tracking onboard real robots.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157823</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Future Space Debris Population and Orbital Capacity</title>
<link>https://hdl.handle.net/1721.1/157822</link>
<description>Modeling the Future Space Debris Population and Orbital Capacity
Jang, Daniel
Increased investments and technological advances in satellite manufacturing and launch services have led to a newly vitalized Low Earth Orbit (LEO) environment. Megaconstellations consisting of hundreds to hundreds of thousands of satellites have been proposed, with SpaceX’s Starlink satellite constellation now reaching more than 5400 operational satellites. This denser LEO environment underscores the urgent need for models to predict and manage the risk of collisions and the sustainable use of space. Many models have been proposed over the years to quantify the risk of collisions between resident space objects, including the seminal paper by Kessler that described the runaway conditions for which LEO could become unusable. In this thesis, the development of the MIT Orbital Capacity Analysis Tool (MOCAT) is described along with conclusions and insights. MOCAT is a novel open-source approach to evaluating the LEO environment and comprises a Source–Sink Evolutionary Model (SSEM) and a Monte Carlo (MC) method. The SSEM simplifies the complex dynamics of space-object interactions into deterministic equations, focusing on the long-term evolution of orbital populations across different altitude shells. The simplified nature of the SSEM allows for computational efficiency, which enables optimization routines such as the exploration of equilibrium solutions for LEO carrying capacity. The improvements to the SSEM in this work, through binning in the physical dimension as well as inclusion of Delta-V dynamics from the collision dynamics, increase the fidelity of the SSEM. In comparison, MOCAT-MC offers a comprehensive means to simulate the individual interactions between RSOs. The MOCAT-MC tool propagates the orbits of low Earth orbit objects and models their interactions including collisions and explosions, and provides insights into the evolving trends of the LEO population.
Of particular note is the computational efficiency of the model, which is essential for managing the complexities inherent in orbital dynamics and the potentially large number of objects centuries into the future. Validation results and a range of simulations, including no-future-launch scenarios and the launch of proposed megaconstellations totaling more than 80,000 active payloads, are explored, resulting in millions of trackable objects. Although far fewer megaconstellations are planned at higher altitudes, even a small fraction of failures in post-mission disposal or collision avoidance maneuvers results in an outsized effect on orbital debris accumulation. MOCAT-MC is able to simulate Lethal Non-Trackable (LNT) objects, which comprise the vast majority of the orbital population today. This lethal non-trackable population will only grow as more payloads and debris are launched into orbit and increase the collision rate. The effect of these objects is modeled and discussed. These two models offer different approaches to modeling the future orbital environment, each with its strengths and weaknesses. Validation against existing models in the literature shows the utility of MOCAT in informing future space traffic management and constellation design. The MOCAT tool has been created such that researchers can use a common model that is validated, robust, and efficient, allowing for advancement in our ability to forecast and mitigate the risks associated with the increasing density of LEO while advocating for a more sustainable approach to space exploration and utilization.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157822</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Principles from Cognitive Science to Analyze and Guide Language-Related Neural Networks</title>
<link>https://hdl.handle.net/1721.1/157821</link>
<description>Using Principles from Cognitive Science to Analyze and Guide Language-Related Neural Networks
Tucker, Mycal
Natural language, while central to human experience, is not uniquely the domain of humans. AI systems, typically neural networks, exhibit startling language processing capabilities from generating plausible text to modeling simplified language evolution. To what extent are such AI models learning language in a “human-like” way? Defining “human-like” generally may be an impossible problem, but narrower definitions of aspects of human-like language processing, borrowed from cognitive science literature, afford metrics for evaluating AI models. In this thesis, I borrow two theories about human language processing for such analysis. First, human naming systems (e.g., a language’s words for colors such as “red” or “blue”) appear near-optimal in an information-theoretic sense of compressing meaning into a small number of words; I ask how one might train AI systems that behave similarly. Second, people understand and produce language according to hierarchical representations of structure; I study whether large language models use similar representations in predicting text. Thus, in this thesis, I show how to train and analyze neural networks according to cognitive theories of human language processing. In my first branch of work, I introduce a method for neural network agents to communicate according to cognitively-motivated pressures for utility, informativeness, and complexity. Utility represents a measure of task success and induces task-specific communication; informativeness is a task-agnostic measure of how well listeners understand speakers and leads to generalizable communication; complexity captures how many bits are allocated for communication and can lead to simpler communication systems. All three terms are important for human-like communication. In experiments, training artificial agents according to different tradeoffs between these properties led them to learn different naming systems that closely aligned with existing natural languages.
In my second branch of work, rather than training neural agents from scratch, I probe pre-trained language models and found that some use representations of syntax in making predictions. Humans use hierarchical representations of sentence structure in understanding and producing language, but it is unclear if large language models, trained on simple tasks like next-word-prediction, should learn similar representations. I introduce a causal probing method that sheds light on this topic. By creating counterfactual representations of syntactically ambiguous sentences, I measured how model predictions changed for different structural interpretations of the same sentence. For example, I recorded model predictions to ambiguous inputs like “The girl saw the boy with the telescope. Who had the telescope?” with different syntactic structures. For some (but not all) models, I found that they use representations of syntax (e.g., change their answers to the previous question). Thus, I offer novel insight into pre-trained models and a new method for studying such models for other properties. The two halves of my thesis represent complementary approaches towards more human-like AI; training new models and analyzing pre-trained ones closes an AI development feedback loop. In this thesis, I explain my contributions to both parts of this loop and identify promising directions for future research.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157821</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Database and Application Programming Interface Development for Rotational Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/157820</link>
<description>Database and Application Programming Interface Development for Rotational Spectroscopy
Cheung, Jasmine So Yee
The Species-agnostic Automated Gas Analyzer (SAAGA) project aims to automate the detection and characterization of chemical compounds in a complex gas-phase chemical mixture through experimental rotational spectroscopy and computational tools. A database of spectroscopic data serves as the foundation of the automation pipeline for assigning spectral lines to species. While there are existing databases available for use, we developed our custom database, named SAAGAdb, and an application programming interface (API) to access it, in order to fulfill the needs of SAAGA. SAAGAdb is designed to store structured, high-quality spectroscopic data of all species, not limited to astrochemically relevant ones, enabling convenient data manipulation, integration into future automation pipelines, deployment, and maintenance. We implemented software development best practices, including a software development life cycle, continuous integration/continuous delivery, and version control, to develop a PostgreSQL database with a Python API built on Django with RDKit integration. The product passed all unit tests and was successfully seeded with data. With the flexibility provided by the Django framework as well as detailed documentation of the software, SAAGAdb and its API can be easily improved and expanded in the future to suit the needs of the SAAGA project.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157820</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Kinetics and Nonequilibrium Thermodynamics of Driven Systems: Stochastic Methods and Applications to Single-Molecule Biophysics</title>
<link>https://hdl.handle.net/1721.1/157819</link>
<description>Statistical Kinetics and Nonequilibrium Thermodynamics of Driven Systems: Stochastic Methods and Applications to Single-Molecule Biophysics
Piephoff, D. Evan
Advances in condensed-phase spectroscopy have made it possible to obtain time traces of biomolecules at the single-molecule level of detail. These real-time trajectories provide details that are typically unavailable in ensemble-averaged experiments, such as the effect of conformational dynamics on enzymatic reactions. From a theoretical perspective, it is therefore valuable to develop kinetic approaches for characterizing measurable quantities in order to connect to such single-molecule experiments. In this thesis, we analyze the statistical kinetics and nonequilibrium thermodynamics of driven biomolecular systems, with a particular emphasis on enzymatic processes. Specifically, we focus on kinetic methodology development; analyzing single-molecule fluctuations for mechanistic insight; examining the modulating influence of conformational interconversion on enzyme catalysis; and characterizing the nonequilibrium thermodynamics of generalized biomolecular machines. For enzymatic turnover reactions, it is found that the turnover rate reduces to the celebrated Michaelis–Menten functional form when conformational detailed balance is satisfied. In the presence of non-vanishing conformational currents, we predict and characterize the rich, cooperative behaviors attainable in conformational nonequilibrium. In addition, enzyme turnover fluctuations are analyzed by studying the Poisson indicator, a normalized measure of stochastic variation. A novel pathway analysis framework is extended to nonrenewal processes (i.e., those with correlated inter-event times) and fully reversible processes, accounting for kinetic network complexities, nontrivial event-averaged initial conditions, and the constraints associated with microscopic reversibility.
For a dynamically disordered biomolecular machine involving an observable process coupled to a hidden process, a recently derived time-based fluctuation theorem no longer applies to the observable first-passage time; however, using a stochastic thermodynamics approach to examine fluctuating trajectories, we find that its validity is restored in the absence of hidden flux through the initial state manifold. Thus, the violation of this relation serves as an experimentally verifiable signature of hidden detailed balance breaking. The analysis presented herein provides a novel framework for analyzing a variety of kinetic processes, including enzyme turnover, molecular motor translocation, ion transport, and fluorescence emission.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157819</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies on stereospecific diketopiperazine oxidation and applications to the synthesis of complex epidithiodiketopiperazines</title>
<link>https://hdl.handle.net/1721.1/157818</link>
<description>Studies on stereospecific diketopiperazine oxidation and applications to the synthesis of complex epidithiodiketopiperazines
Walker, Katherine L.
I. Introduction and Background on Epidithiodiketopiperazines &#13;
&#13;
A brief history and summary of methods for synthesis of epidithiodiketopiperazines (ETPs) are discussed. Three hypotheses for the mechanism of action of these biologically active natural products are reviewed, and the unified biosynthetic hypothesis that our group disclosed is summarized. The total syntheses of the natural product hyalodendrin are analyzed as a case study of the total synthesis of ETPs, and representative examples of our group’s entries into the synthesis of complex ETPs are examined.&#13;
&#13;
II. Studies on Stereospecific Diketopiperazine C–H Hydroxylation&#13;
&#13;
Mechanistic investigation of the permanganate-mediated hydroxylation reaction of 2,5-diketopiperazines (DKPs) is discussed. The course of the hydroxylation reaction with three permanganate oxidants examined in our total synthesis of naturally occurring epipolythiodiketopiperazines (ETPs) is investigated with respect to the activity of the different oxidants, as well as the stereochemical outcome and the configurational stability of the product diols. An example of a subsequent thiolation was then demonstrated to proceed with retention of stereochemistry, in contrast to the stereoinvertive thiolations previously observed in several total syntheses. The data are supported by computational analyses.&#13;
&#13;
III. Progress Toward the Total Synthesis of (+)-Chetomin&#13;
&#13;
We describe our work toward the total synthesis of the ETP natural product (+)-chetomin. Key features of the synthetic progress include a method for construction of the key nitrogen–carbon bond with advanced reaction partners, including protected diols, sulfides, and ETPs, and stereocontrolled thiolation strategies. The challenges remaining to access (+)-chetomin are addressed on model systems.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157818</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of Synthetic Proteins Produced via Automated Fast-Flow Peptide Synthesis</title>
<link>https://hdl.handle.net/1721.1/157817</link>
<description>Investigation of Synthetic Proteins Produced via Automated Fast-Flow Peptide Synthesis
Cowfer, Amanda Elizabeth
Flow chemistry techniques and methods have given the broad scientific community high-fidelity access to chemical compounds with minimal effort compared to traditional synthetic techniques. Since the introduction of solid phase peptide synthesis (SPPS), the peptide community has endeavored to combine the convenience of flow chemistry with the iterative steps associated with peptide elongation in SPPS. Nearly one decade ago, members of the Pentelute lab envisioned and developed a flow-based peptide synthesizer, the Automated Fast-Flow Peptide Synthesizer, or AFPS for short. This technology enabled fast, reliable access to short peptide chains, with each coupling taking less than 3 minutes in total, significantly decreasing the labor needed to produce these peptides. However, peptide chains over 50 amino acids remained challenging to produce via AFPS, microwave synthesis, or traditional SPPS batch couplings. With modern research requiring rapid and high-fidelity access to long polypeptide chains, there is an immediate need for peptide synthesis technology that can produce single-domain protein polypeptides in a single shot. Herein, I report on the arduous journey and unmatched teamwork needed to improve the AFPS systems for regular, reliable access to polypeptide chains of more than 200 amino acids in a single working day. In addition, I will highlight the workflow and knowledge needed to take a free polypeptide chain to a fully folded and biologically active protein, equivalent in form and function to its recombinant counterparts. I will discuss the iterative steps my team took to vary chemical, mechanical, and control variables to improve per-coupling yield enough to enable access to full-length single-domain proteins. On this journey, we utilized test peptides to validate synthesis quality and later synthesized a suite of full-length single-domain biologically active proteins. I will spend some time focusing on the barnase-barstar binding pair. 
Next, I will dive into how I build and design each AFPS synthesizer to improve synthesis outcomes and user-friendliness while retaining the core functionality and customizability that have made the AFPS so successful in the Pentelute lab. I will highlight my role in the renovation of the first generation AFPS system, the “Automatide,” and dive into the key characteristics that set our synthesizers apart from what is currently commercially available. Finally, we report on the synthesis and characterization of several small and very interesting luciferases. Luciferases are proteins that produce bioluminescence when exposed to specific chemical substrates, and for the organisms that produce these enzymes, they play a vital role in mating, defense, and camouflage. In the research arena, luciferases have had broad applications for decades, including detection of environmental contaminants, diagnosis of pathogens, high-throughput screening for drug discovery, understanding protein-protein interactions, and more. Current efforts in the field have focused on the development of small artificial luciferases due to their many advantages over traditional larger luciferases, such as enhanced stability and increased brightness. Herein, we report on the synthesis and characterization of the copepod Gaussia princeps luciferase GLuc (18 kDa), and the artificial luciferases picALuc (12 kDa) and LuxSit-I (14 kDa). In addition, we synthesized the mirror-image counterpart of picALuc due to its potential for broad-reaching impact in health and diagnostics; this is the first reported mirror-image bioluminescent luciferase. Finally, we will report on our efforts to develop a split-picALuc protein complementation assay (PCA) using AS-MS technology, which will be the smallest and most versatile split-luciferase reported to date. 
In summary, fast-flow peptide synthesis was utilized to produce and investigate several biologically relevant proteins to improve upon existing tools available to the broad chemistry and biology community.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157817</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Efficient Planning for Navigation using Global Information in Large and Uncertain Environments</title>
<link>https://hdl.handle.net/1721.1/157816</link>
<description>Towards Efficient Planning for Navigation using Global Information in Large and Uncertain Environments
Kurtz, Martina Stadler
We would like to enable a team of robots to navigate quickly and efficiently in large and uncertain outdoor environments. We hypothesize that in such environments, global, uncertainty-aware information is necessary to enable high-quality planning. However, most existing systems do not model or plan using global, uncertainty-aware information. For example, many planners assume access to complete global information in the form of full environment maps, or they assume that locally good planning decisions under uncertainty will result in globally good planning outcomes. To enable the use of global information for planning in large and uncertain environments, we must develop models that concisely represent key navigation features of the environment, and build planners that are capable of reasoning efficiently about global information. In this thesis, we design models and planners that use global information in large and uncertain environments to increase the efficiency and quality of planning for navigation. We present four contributions towards using global information for efficient navigation. First, we propose a high-level planning representation that can be learned from previous plans considered in the environment and used online during hierarchical, multi-query robot navigation. Second, we propose a planner for collaborative multiagent navigation in an uncertain environment; the approach uses macro-actions and value function approximations to maintain computational tractability. Third, we develop a robust hierarchical planning system to enable the deployment of the collaborative multiagent planner on a real-world team navigating in a structured, uncertain outdoor environment. Finally, we develop a method for learning uncertainty-aware, single agent value function-based approximations from graph data to increase the efficiency of the collaborative multiagent planner.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157816</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Cleavable Monomers and Cross-Linkers for the Synthesis of Degradable Polymer Architectures</title>
<link>https://hdl.handle.net/1721.1/157815</link>
<description>Design of Cleavable Monomers and Cross-Linkers for the Synthesis of Degradable Polymer Architectures
Cardoso da Costa, Leticia
Degradable materials, with different chemical compositions and various polymer architectures, are desirable for countless purposes, ranging from biological applications to recyclability of plastic waste. The creation of brand-new materials with useful, desired properties and built-in degradability is, however, very difficult. The introduction of labile bonds into already known polymers thus offers a much simpler approach for the manufacture of degradable materials. Here, we report the design of cleavable monomers and cross-linkers for the synthesis of degradable materials with different polymer architectures. The first half of this thesis focuses on the design of new degradable bottlebrush and brush-arm star polymers (BASPs) via ring-opening metathesis polymerization (ROMP). A brief introduction to the recent advances of bottlebrushes and related nanoarchitectures as a promising carrier platform is provided, followed by the current efforts to impart degradability within the nanoparticle in order to modulate its drug release and clearance rate (Chapter 1). After the introduction, we present the synthesis of boronic ester-crosslinked BASPs that selectively disassemble into bottlebrush fragments upon exposure to hydrogen peroxide, which is often elevated in diseased tissue microenvironments. The H2O2-induced disassembly of spirocyclohexyl nitroxide (chex)-containing BASPs induces a change in transverse magnetic relaxivity that can be detected via magnetic resonance imaging (MRI) (Chapter 2). In the next chapter, we present the synthesis of backbone-degradable bottlebrush polymers via the co-polymerization of drug-loaded norbornene macromonomers with a library of tailored silyl ether-based olefins via ROMP. The difference in backbone degradation rates, imparted by the silyl ether substituents, leads to different drug release profiles and therapeutic efficacy in vitro (Chapter 3). 
The second half of this thesis focuses on the introduction of degradable bonds into the polymer backbone of vinylic thermosets via radical ring-opening polymerization (rROP). A brief introduction to the current strategies utilized to impart chemical deconstruction to cross-linked polymer networks prepared by radical polymerization is presented (Chapter 4). Lastly, we improve the performance of a consumer good, gel nail polish, by imparting degradability. Gel nail polishes, UV-curable (meth)acrylic coatings, display superior mechanical and adhesive properties compared to alternative nail polishes. These properties, however, come at the expense of ease of removal. Here, a cleavable bond is introduced into the resulting cured polymer networks via co-polymerization with a cleavable comonomer. This approach does not impact the material’s properties while enabling easy and fast removal under triggered deconstruction (Chapter 5).
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157815</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging the Properties of Aprotic Solvents Towards Efficient Electrocatalytic Carbon Dioxide Reduction</title>
<link>https://hdl.handle.net/1721.1/157814</link>
<description>Leveraging the Properties of Aprotic Solvents Towards Efficient Electrocatalytic Carbon Dioxide Reduction
Chu, An T.
Electrochemical carbon dioxide reduction has been studied as a method to sustainably produce valorized hydrocarbons. However, the reaction faces two challenges: low reaction selectivity towards value-added products, and the deleterious reaction of carbon dioxide with the electrolyte to form soluble carbonate species. While both issues are sensitive to the composition of the electrolyte, the reaction has been exhaustively studied in aqueous electrolytes with limited opportunities for further extensive tunability. This thesis describes approaches for overcoming low reaction selectivity and electrolyte carbonation using aprotic-solvent-based electrolytes. We leverage the unique solvation environments and equilibrium acidities accessible in such media to overcome key limitations to reaction performance that are intrinsically linked to the use of aqueous electrolytes. We demonstrate key principles for tuning aprotic-solvent-based electrolytes towards improving carbon dioxide electroreduction catalysis, establishing the foundation for the development of advanced electrolyte designs.&#13;
&#13;
Chapter 1 details the development of a dimethyl sulfoxide / acetic acid electrolyte which can engender selective carbon dioxide reduction with minimal electrolyte carbonation on gold cathodes. We demonstrate that the key to achieving this balance of properties is operating an electrolyte with a low water content, with simultaneous usage of a buffer which is non-nucleophilic and whose pKa is matched to the carbon dioxide / bicarbonate equilibrium. Under such conditions, the selectivity to carbon monoxide can be driven as high as 90% with only millimolar equilibrium bicarbonate formation: a compromise difficult to achieve in water.&#13;
&#13;
Chapter 2 details the discovery of a new mechanism for ethylene electrosynthesis on a copper catalyst using a dimethyl sulfoxide / phenol electrolyte. Starting from carbon monoxide, a crucial intermediate in the carbon dioxide reduction pathway, we present kinetic evidence that radically altering the solvent environment and proton donor can enable a mechanism involving quasi-equilibrium proton and electron transfer steps prior to a late rate-determining step. By demonstrating that the pathway in dimethyl sulfoxide / phenol has a potential-rate scaling and acid order distinct from those in aqueous electrolytes, we establish a new tunable platform for enabling selective electrocatalysis of hydrocarbon products.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157814</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developments in THz Polaritonics: Towards Integrated Nonlinear THz Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/157813</link>
<description>Developments in THz Polaritonics: Towards Integrated Nonlinear THz Spectroscopy
Sung, Eric Rueyhao
The terahertz (THz) polaritonics platform is a compact, waveguide-based platform for the generation, manipulation, and detection of THz waves. The platform uses thin (&lt;100 μm) lithium niobate (LiNbO₃, LN) and lithium tantalate (LiTaO₃, LT) slabs, which can be patterned to control THz propagation. One of the unique features of the platform is that the THz fields can be imaged directly within the slab with subwavelength spatial resolution and subcycle temporal resolution. Both the amplitude and phase of the fields are recorded, which allows the full spatiotemporal evolution of the fields to be visualized. This makes the platform appealing for compact, waveguide-based THz experiments. The work in the thesis aims to develop tools to enable robust, compact THz spectroscopy using the polaritonics platform.&#13;
&#13;
The first phase of my research aims to develop methods for enhanced THz generation in the waveguides. In a typical polaritonics experiment, the optical pump light is focused to a single line which launches THz fields with electric field strengths of approximately 10 kV/cm. Although the fields are sufficiently strong for THz imaging, any nonlinear spectroscopic applications would require the use of much larger THz fields so that the much weaker THz transients that result from multiple interactions with the sample could be reliably detected. To this end, I developed two methods. The first method uses thin LN waveguides with a beveled edge for enhanced narrowband THz generation. The optical pump light is focused onto the bevel, after which it refracts and becomes confined within the waveguide by total internal reflection. This allows the pump beam to repeatedly drive the generated THz field during its multiple back-and-forth traversals within the LN slab. Using this method, we observe a 10-fold enhancement of the THz spectral amplitude at the velocity-matched frequency. The second method combines the tilted pulse front geometry with THz focusing to generate a strong THz field in the time domain. A circular stair-step "echelon" mirror is used to shape the pump pulse into a conical tilted pulse front composed of a series of concentric rings of pump light. When the pump rings are imaged onto a thin LT waveguide, coherent superposition of the focusing THz fields excited individually by each pump ring results in a dramatically enhanced THz field at the focus. When optimized, the method generates THz fields with electric field strengths up to 175 kV/cm, which is roughly 20x larger than what is generated by a single line of pump light.&#13;
&#13;
The second phase of my research focuses on methods for expanding the polaritonics toolset for spectroscopic applications. Previous experiments coupling the THz phonon-polaritons in a LN waveguide to the quasi-antiferromagnetic magnon mode in an adjacent slab of ErFeO₃ relied on the two materials having similar refractive indices, which restricts the range of usable samples. Furthermore, the ErFeO₃ layer complicates THz imaging because it strongly absorbs the optical probe light. I investigated two experimental geometries to address these concerns. The first geometry uses a high-reflecting coating sandwiched between the LN slab and the sample material. The coating is designed to reflect the optical probe light, which enables THz imaging in LN by preventing the probe light from entering the sample and greatly expands the range of possible samples. The second geometry uses a slot waveguide to localize the THz field within a low-index slot region, which results in much stronger interactions between the THz fields and a sample inserted into the slot. Using this geometry, the linear THz absorption spectrum of a test sample was measured with good sensitivity and the complex dielectric function was recovered.&#13;
&#13;
The work presented here describes methods for enabling robust integrated THz spectroscopy in the polaritonics platform. The methods, when combined, should also form the basis for future polaritonics experiments that interrogate the nonlinear THz responses of materials.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157813</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring and Exploiting Ribonuclease 1: from Protein Biochemistry to Protein Engineering</title>
<link>https://hdl.handle.net/1721.1/157812</link>
<description>Exploring and Exploiting Ribonuclease 1: from Protein Biochemistry to Protein Engineering
Wralstad, Evans Christian
Ribonuclease (RNase) 1 is a human protein with a remarkable ability to indiscriminately hydrolyze RNA. RNase 1 and its bovine homologue RNase A exhibit ubiquitous expression across tissues, a catalytic efficiency within the diffusion-limited regime, and minimal substrate sequence requirements. RNase A has been a favorite model protein of biochemists for over half a century; due to the high level of sequence conservation between RNase A and RNase 1, many observations made for RNase A have corollaries for RNase 1.&#13;
&#13;
RNase 1 and RNase A are members of the pancreatic-type ribonuclease (ptRNase) superfamily, a class of enzymes which share many biophysical features, including a small molecular weight, high cationicity, and a secretory nature. Historical elucidation of ribonuclease biochemistry describes their susceptibility to oxidation-induced inactivation. This raises the question: how are these secretory enzymes able to preserve catalytic competency in oxidatively challenging extracellular environments such as blood serum and even epidermal skin?&#13;
&#13;
In Chapter 2 of this thesis, the intrinsic antioxidative capacity of RNase 1 is described. Chemical biology and biomimetic techniques corroboratively implicate two methionine residues as sacrificial antioxidants to protect the enzymic active site, allowing catalysis to persist in the presence of reactive oxygen species. In silico studies suggest evolutionary patterns to install these antioxidative features across the ptRNase superfamily. Sulfur–arene interactions appear to tune the reactivity of methionine residues in a manner consistent with rates of oxidation. These findings highlight an underappreciated role for methionine—to protect catalytic histidine residues—and indicate a means by which ptRNases remain functional in oxidatively challenging physiological environments.&#13;
&#13;
The desirable biophysical features of RNase 1 and the wealth of biochemical knowledge regarding it have also made it a favored model system of protein engineers, as exemplified by RNase S and cyclic RNase-based zymogens, two systems which reversibly attenuate ribonucleolytic activity. In particular, RNase-based zymogens can be activated by exogenous proteases; this schema has biotherapeutic potential, as demonstrated by zymogens which activate in response to viral infection and exert cytotoxic ribonucleolytic activity.&#13;
&#13;
Efforts to establish a zymogen directed toward the coronavirus SARS-CoV-2 are described in two parts of this thesis. In Chapter 3, the main protease 3CLpro of SARS-CoV-2 is enzymologically characterized. This work clarifies reported inconsistencies in enzymological features of this key viral protease and relies on a non-Michaelis–Menten, Bayesian inference-based analytical technique to circumvent some of the causes of the inconsistent prior reports. Then, in Chapter 4, the newfound knowledge of 3CLpro enzymology is applied toward the design of an RNase 1-based, 3CLpro-directed zymogen. The zymogen is inactivated by steric occlusion and conformational distortion of the active site, and site-specific activation by 3CLpro results in a multi-order-of-magnitude increase in ribonucleolytic activity. 3CLpro action upon the zymogen leads to ribonucleolytic turnover of a fluorescent RNA substrate by the activated species, affording signal amplification that enables detection of nanomolar 3CLpro concentrations in a timeframe comparable to rapid antigen detection testing.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157812</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Market-Based and Policy-Based Conditional Demand Forecaster for Airline Revenue Management</title>
<link>https://hdl.handle.net/1721.1/157811</link>
<description>Market-Based and Policy-Based Conditional Demand Forecaster for Airline Revenue Management
Lu, Yuxuan
The price transparency brought by the proliferation of online travel agencies and the loosening of fare class restrictions increases the importance of competitive pricing in the practice of airline revenue management. In this dissertation, we propose a novel forecasting framework, the market-based and policy-based conditional demand forecaster (MPCF), to provide airline revenue management systems (RMSs) with dynamically adjusted sell-up probabilities and fare class demand forecasts based on estimated total market demand (market-based) and predicted fare class availabilities of competitors (policy-based) for a future flight departure.&#13;
&#13;
In the MPCF framework, an airline estimates the total market demand for travel on a future departure day and predicts its competing airline’s future fare class availabilities. The estimation of the total market demand allows the forecasting airline to anticipate additional demand when it offers lower fare quotes than its competitors and vice versa; the prediction of a competitor’s policy enables the revision of expected passenger sell-up probabilities to higher fares, conditioned on competitive influences and assumed passenger choice behaviors.&#13;
&#13;
In simulation tests, we assumed a Bertrand-Edgeworth passenger choice model, corresponding to a fully undifferentiated fare environment. Under the assumption of perfect knowledge of the total market demand that has already arrived for a future departure date, MPCF gains 12.69% in revenue compared with existing Q-forecasting in an isolated market. Without that perfect knowledge, which requires estimation of total market demand, MPCF still gains 6.23% in revenue. MPCF leads to higher revenue gains on departures with demand levels that differ from the mean historical demand, demonstrating its benefits in providing dynamic sell-up and forecast guidance according to the predicted policies of competitors. The simulation results confirm the benefits of constructing demand forecasts on predicted market demand and competitors’ policies.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157811</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A GPU-Enabled Building Block Flow Model for Computational Fluid Dynamics</title>
<link>https://hdl.handle.net/1721.1/157810</link>
<description>A GPU-Enabled Building Block Flow Model for Computational Fluid Dynamics
Costa, Samuel Thomas
Computational Fluid Dynamics (CFD) is a key tool in the design of aircraft, allowing engineers to predict the performance of a configuration without having to conduct expensive physical tests. However, in order to move to a greater reliance on CFD, the industry requires a high level of accuracy and fast turnaround time, which current methods cannot deliver. In recent years, the rapid development of the GPU industry has led to an explosion of computational power within the GPU architecture. This has allowed wall-modeled large eddy simulation (WMLES), a higher-fidelity simulation technique, to become practical for industry use. WMLES requires the use of both a sub-grid scale (SGS) model and a wall model in order to close the system of equations for integration. Although WMLES delivers an improvement over previous methods, classical SGS and wall models do not deliver the accuracy required by the aviation industry. To help close this gap, we introduce a GPU-compatible version of the Building-Block Flow Model (BFM), a machine-learning-based unified sub-grid scale and wall model for LES introduced in [1]. In this thesis, we discuss the implementation of the BFM for GPUs, the timing of the BFM versus other closure models for WMLES, a variety of tests designed to evaluate the BFM's performance, and possible avenues of improvement.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157810</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Practical Engineering Design Optimization with Computational Graph Transformations</title>
<link>https://hdl.handle.net/1721.1/157809</link>
<description>Accelerating Practical Engineering Design Optimization with Computational Graph Transformations
Sharpe, Peter D.
Multidisciplinary design optimization has immense potential to improve conceptual design workflows for large-scale engineered systems, such as aircraft. However, despite remarkable theoretical progress in advanced optimization methods in recent decades, practical industry adoption of such methods lags far behind. This thesis identifies the root causes of this theory-to-practice gap and addresses them by introducing a new paradigm for computational design optimization frameworks called code transformations. Code transformations encompass a variety of computational-graph-based scientific computing strategies (e.g., automatic differentiation, automatic sparsity detection, problem auto-scaling) that automatically analyze, augment, and accelerate the user’s code before passing it to a modern gradient-based optimization algorithm. This paradigm offers a compelling combination of ease-of-use, computational speed, and modeling flexibility, whereas existing paradigms typically make sacrifices in at least one of these key areas. Consequently, code transformations present a competitive avenue for increasing the adoption of advanced optimization techniques in industry, all without placing the burden of deep expertise in applied mathematics and computer science on end users. The major contributions of this thesis are fivefold. First, it introduces the concept of code transformations as a possible foundation for an MDO framework and demonstrates their practical feasibility through aircraft design case studies. Second, it implements several common aircraft analyses in a form compatible with code transformations, providing a practical illustration of the opportunities, challenges, and considerations involved. Third, it presents a novel technique to automatically trace sparsity through certain external black-box functions by exploiting IEEE 754 handling of not-a-number (NaN) values. 
Fourth, it proposes strategies for efficiently incorporating black-box models into a code transformation framework through physics-informed machine learning surrogates, demonstrated with an airfoil aerodynamics analysis case study. Finally, it shows how a code transformations paradigm can simplify the formulation of other optimization-related aircraft development tasks beyond just design, exemplified by aircraft system identification and performance reconstruction from minimal flight data. Taken holistically, these contributions aim to improve the accessibility of advanced optimization techniques for industry engineers, making large-scale conceptual multidisciplinary design optimization more practical for real-world systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157809</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Integrated Vehicle, Payload, and Trajectory Optimization Framework for Highly-Coupled Aircraft Systems</title>
<link>https://hdl.handle.net/1721.1/157808</link>
<description>An Integrated Vehicle, Payload, and Trajectory Optimization Framework for Highly-Coupled Aircraft Systems
Dewald, Annick J.
A class of highly-coupled aircraft systems is identified in Earth observation applications, where the aircraft design couples tightly with the science instrument design and the operation of both the aircraft and science payload. This dissertation identifies an opportunity to simultaneously optimize the aircraft platform, the science payload, and the operational strategy under one system-level objective function to improve the performance of the total aircraft system. This approach extends the field of multidisciplinary design optimization (MDO), which has demonstrated that simultaneously optimizing all the subsystems within a larger system allows the optimizer to leverage the couplings between disciplines, rather than being subject to them, resulting in better performance outcomes [1]. The inclusion of the instrument and trajectory in the optimization problem introduces additional objectives related to the science mission needs. While many multi-objective optimization methods exist, they become intractable given the many objectives present in such complex systems. A methodology is proposed to explore trade-offs between multiple objectives by sweeping through different combinations of weighting terms in a weighted-sum objective function to find Pareto-optimal design points across the design space. These design points are then evaluated within the objective space, a hyperspace where each axis corresponds to a different objective, to understand the performance capabilities with respect to each objective and to evaluate trade-offs between objectives. Findings from this objective-space exploration can then be communicated to the science stakeholder to find the design best capable of meeting the identified science mission needs. This dissertation then applies this methodology to a series of case studies on a representative science mission.
The science mission objective of these case studies is to reduce uncertainty in predictions of sea-level rise by understanding ice mechanics that drive ice shelf collapse and destabilize previously grounded glaciers.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157808</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relationship between synoptic scale meteorology, aircraft parameters, and observable contrails</title>
<link>https://hdl.handle.net/1721.1/157807</link>
<description>Relationship between synoptic scale meteorology, aircraft parameters, and observable contrails
Barbosa, Maria Paula
Long-lasting or "persistent" contrails are line-shaped clouds that form when airplanes fly through cold, humid, ice-supersaturated parts of the atmosphere. Various studies have shown that persistent contrails may be responsible for more than half of aviation’s radiative forcing [1]. Efforts to mitigate persistent contrail formation include operational contrail avoidance. Current research suggests that minor (∼2000 ft) altitude deviations during cruise, in conjunction with advancing engine technologies, have the potential to reduce contrail climate forcing by approximately 90% [2]. Identifying and attributing observed contrails to specific individual flights is necessary to demonstrate the success of flight deviations. Reliable flight attribution, therefore, is critical in verifying large-scale implementation of contrail avoidance strategies. Flight attribution leverages both Earth-observation methods, such as satellite images and weather data, and flight data. However, temporal and spatial "blindspots" in satellite instruments, coupled with uncertainties in wind fields, have hindered reliable flight attribution. In this work, we consider eight different probabilistic flight attribution algorithms. All algorithms rely on "similarity measures," which we define as the differences in distance, heading, and altitude between a contrail and candidate flight line segments. We define algorithms that use only the distance and heading-difference measures as two-dimensional (2D), and those that additionally include altitude as three-dimensional (3D). The probabilistic aspect of all eight algorithms is intended to account for errors in wind data and relies on the calculation of a Gaussian probability density function for each similarity measure.
To mitigate wind and positional errors that compound over time, four of the algorithms include contrails from previous timestamps as potential match candidates. To account for changes in flight path due to temporal factors, four of the algorithms use time-dependent Gaussian parameters. The inputs to all algorithms include contrail detections, weather data, and flight data. To perform this analysis, a dataset of 180 manually attributed, unique contrails was created that captures regional (across the continental United States) and diurnal variation. Each contrail was tracked for part of its lifetime, resulting in 1980 total attributions. These attributions were created by seven labelers, with some overlapping scenes. A parameter sweep was performed on the four 2D algorithms to determine locally optimal Gaussian parameters. This sweep was performed on a reduced dataset consisting of 32 unique contrails and 218 total labels. The results of this sweep show that the accuracy of the algorithms, when using optimal Gaussian parameters, ranges from 79.7% to 83.6%. Accuracy is defined as the percentage of contrails that were attributed to the correct flights. These results are solely for the 2D algorithms that were analyzed on the reduced dataset. We then applied the "locally" optimal Gaussian parameters from the four 2D algorithms to the respective 3D algorithms and ran all eight algorithms on the remaining 148 contrails (1762 labels). We find that the optimal performance for all eight algorithms ranges from 68.2% to 76.2%. A deeper analysis is also conducted to evaluate the scene conditions that affect algorithm performance.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157807</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Opportunities and Limitations of Earth Observation Technology for Environmental Justice Advocacy: A Case Study of Toxic Prisons in the U.S.</title>
<link>https://hdl.handle.net/1721.1/157806</link>
<description>Opportunities and Limitations of Earth Observation Technology for Environmental Justice Advocacy: A Case Study of Toxic Prisons in the U.S.
Ovienmhada, Ufuoma
People of color and other socio-economically marginalized groups in the United States experience a disproportionate burden of environmental challenges such as air pollution and extreme heat; the Environmental Justice (EJ) movement aims to combat these burdens and promote collective well-being. Earth Observation (EO) technology, such as satellites, can be used to monitor air quality, extreme heat, and other quantities relevant to EJ. However, the application of this technology to measuring EJ or supporting EJ advocacy efforts has not been widely explored. Satellite EO systems also historically have not been designed with EJ end users in mind. This application is increasingly pressing as space agencies like NASA seek information on how their data can be used to support underserved communities. This dissertation brings together EO data science, systems engineering, and community engagement to elucidate opportunities and limitations of Earth Observation Technology for Environmental Justice Advocacy. The dissertation is organized into three categories of contributions – Description, Evaluation, and Design/Prescription – that are each composed of multiple research efforts.&#13;
&#13;
In Description, I apply a three-pronged approach to provide insights on the opportunities and limitations of EO data for EJ. First, along with a team of researchers, I assess peer-reviewed literature on satellite data for environmental justice through a scoping review. The second contribution of this chapter is an interview study with a subset of grassroots EJ actors about how they can use EO data in their domain of EJ activism, which contests the exposure of prisons and incarcerated people to environmental hazards. The third contribution of this chapter is a systems engineering architectural description of NASA’s current satellite EO for EJ ecosystem. Using justice theory as an analytical framework, I reveal limitations of NASA’s current EO for EJ architecture for advancing holistic notions of EJ.&#13;
&#13;
In Evaluation, with support from co-authors, I measure spatiotemporal patterns of air pollution burden and of air and land surface temperature extremes in prison landscapes across the U.S. These studies contribute to a nascent literature documenting empirical evidence of environmental hazards in carceral landscapes; they also extend the literature on applications of satellite-derived and modeled geospatial data for EJ. In Design/Prescription, first, supported by three years of community engagement with prison EJ activists, I present the Design of a GIS decision support system featuring EO data that responds to the expressed needs of prison EJ activists. Then, I present two essays that Prescribe recommendations for methodological innovations in the design and application of EO technologies and geospatial data for EJ advocacy.&#13;
&#13;
Together, these three chapters demonstrate the immediate relevance of EO and geospatial technologies for prison EJ advocacy, and broader implications for the EO community interested in supporting the aims of the EJ movement more holistically.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157806</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Site-Selective Anion Exchange in a Palladophosphorane</title>
<link>https://hdl.handle.net/1721.1/157805</link>
<description>Site-Selective Anion Exchange in a Palladophosphorane
Khuichad, Nichakan
Reported here are studies on chemoselective ligand substitution at a palladophosphorane possessing two potential sites of chloride substitution. Ligation of palladium(II) chloride with a tridentate chelating ligand (L = P(N(o-N(2-pyridyl)C₆H₄)₂)) results in the formation of a complex comprising a d⁸ square-planar palladium center supported by a geometrically constrained chlorophosphorane (PdClL^Cl). The complex thus formed was studied for ligand substitution reactions of the chloro ligand at Pd and P, respectively. Treatment with phenol resulted in substitution of the chloride at the P center while the chloride at Pd remained intact, giving the complex PdClL^OPh. Relatedly, treatment with AgF provided a compound whose NMR spectra are consistent with formation of a P–F-containing palladophosphorane, PdClL^F. However, an attempt to recrystallize the fluoride complex instead resulted in the formation of a cationic, fluoride-bridged complex, although the fluoride still resided between the two phosphorus centers. Overall, substitution experiments on this palladophosphorane indicated a preference for P–Cl substitution over Pd–Cl. The driving force for this preference for exchange at phosphorus has not been extensively explored, but hypotheses invoking hard-soft acid-base concepts and the relative strengths of the bonds involved have been proposed.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157805</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interplay of Transition Metals and Noncovalent Interactions in C–H Activation Catalysis</title>
<link>https://hdl.handle.net/1721.1/157804</link>
<description>Interplay of Transition Metals and Noncovalent Interactions in C–H Activation Catalysis
Vennelakanti, Vyshnavi
Selective C–H activation is crucial for the synthesis of bioactive molecules and natural products, and plays an important role in the pharmaceutical industry, medicinal chemistry, and the materials industry. While synthetic routes to activate unreactive C–H bonds require harsh conditions and usually show poor selectivity, biological systems, such as non-heme iron enzymes, carry out selective C–H activation efficiently under ambient conditions. These enzymes catalyze a variety of reactions including C–H halogenation, hydroxylation, epoxidation, and ring closures, several of which are mediated with the help of noncovalent interactions such as hydrogen bonds (HBs). Most reactions share a common catalytic pathway involving the formation of a reactive ferryl intermediate that is difficult to characterize experimentally. Computational studies of these enzymes help bridge this experimental gap toward understanding enzyme mechanism and selectivity.&#13;
&#13;
In this thesis, we study the interplay of noncovalent interactions and transition metals in C–H activation catalysis using quantum mechanical simulations. We employ density functional theory (DFT) and wavefunction theory to perform an extensive computational study of protein HB interactions and transition metal complex (TMC) active sites in non-heme iron halogenases and hydroxylases. Due to the fleeting nature of the ferryl intermediate, experimentalists often use vanadyl mimics to better understand it. However, these metals exhibit distinct electronic structures, motivating us to investigate whether vanadyl mimics are indeed faithful to the native ferryl intermediates. Studying the mechanisms of metalloenzymes using first-principles methods can be challenging due to their large system sizes. Thus, we also study C–H activation carried out by 3d TMCs, focusing on the specific case of partial oxidation of methane to methanol. While the oxidation and spin states of the metals in the enzyme active site are well defined through spectroscopic methods, that is not the case for TMC catalysts. Thus, modeling TMC catalysts is accompanied by the twin challenges of identifying the ground spin state and determining the appropriate method to identify it, since properties such as reaction energies and scaling relations are sensitive to the computational method used. Additionally, the ability of TMCs to exist in multiple spin states is often leveraged for practical applications; one such example is spin crossover (SCO) complexes, which exhibit a change in spin state as a function of an external stimulus such as temperature and are widely studied due to their increasing use in molecular switches. We curate an experimental data set of 95 Fe(II) SCO complexes and predict SCO behavior using DFT with the aim of identifying the best-performing functional. This in turn sets the stage to design SCO complexes with tailored properties, such as complexes that exhibit SCO behavior at room temperature. We expect that the insights from this work can directly guide efforts in biomimetic chemistry as well as both biological and synthetic C–H activation catalysis.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157804</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for a town hall</title>
<link>https://hdl.handle.net/1721.1/157785</link>
<description>Design for a town hall
Baker, Charles M.
Thesis: B.S., Massachusetts Institute of Technology, Department of Architecture, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157785</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A steam fire engine house</title>
<link>https://hdl.handle.net/1721.1/157784</link>
<description>A steam fire engine house
Chamberlin, William E.
Thesis: B.S., Massachusetts Institute of Technology, Department of Architecture, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157784</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The experimental working by wet and dry methods, of a low grade silver and gold ore from Newburyport, Mass.</title>
<link>https://hdl.handle.net/1721.1/157783</link>
<description>The experimental working by wet and dry methods, of a low grade silver and gold ore from Newburyport, Mass.
Wood, F. W. (Floyd William), 1926-
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157783</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Swain Turbine</title>
<link>https://hdl.handle.net/1721.1/157782</link>
<description>The Swain Turbine
Barrus, George Hale.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157782</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Corliss Steam Engine</title>
<link>https://hdl.handle.net/1721.1/157781</link>
<description>The Corliss Steam Engine
Pond, Frank H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157781</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental treatment of side products formed in smelting the silver lead ore of Newburyport, Mass.</title>
<link>https://hdl.handle.net/1721.1/157780</link>
<description>Experimental treatment of side products formed in smelting the silver lead ore of Newburyport, Mass.
Baldwin, G. J. (George Johnson); Hibbard, Henry D., 1856-
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157780</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Port Henry Iron Industry</title>
<link>https://hdl.handle.net/1721.1/157779</link>
<description>The Port Henry Iron Industry
Allen, C. F.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157779</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>History of musical pitch and its present condition in Boston</title>
<link>https://hdl.handle.net/1721.1/157778</link>
<description>History of musical pitch and its present condition in Boston
Miller, Wm. T.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1880
</description>
<pubDate>Thu, 01 Jan 1880 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157778</guid>
<dc:date>1880-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experiments with Holtz machine</title>
<link>https://hdl.handle.net/1721.1/157777</link>
<description>Experiments with Holtz machine
Mixter, S. J., 1855-1926.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157777</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mean specific gravity of the earth</title>
<link>https://hdl.handle.net/1721.1/157776</link>
<description>The mean specific gravity of the earth
Henck, J. B.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157776</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Newton Water Works</title>
<link>https://hdl.handle.net/1721.1/157775</link>
<description>The Newton Water Works
Plimpton, A. L.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157775</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for iron railway bridge</title>
<link>https://hdl.handle.net/1721.1/157774</link>
<description>Design for iron railway bridge
Nichols, E. J.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157774</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for an iron railway bridge, with a consideration of the principles determining the design</title>
<link>https://hdl.handle.net/1721.1/157773</link>
<description>Design for an iron railway bridge, with a consideration of the principles determining the design
Swain, Geo. F.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157773</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dover Street Draw-Bridge</title>
<link>https://hdl.handle.net/1721.1/157772</link>
<description>Dover Street Draw-Bridge
Stewart, Charles E.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157772</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The development of the side-rod locomotive</title>
<link>https://hdl.handle.net/1721.1/157771</link>
<description>The development of the side-rod locomotive
Voelcker, J. Westgarth.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1923; Includes bibliographical references (leaf [86]).
</description>
<pubDate>Mon, 01 Jan 1923 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157771</guid>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Critical evaluation and correlation of tool-life data</title>
<link>https://hdl.handle.net/1721.1/157770</link>
<description>Critical evaluation and correlation of tool-life data
Colding, Bertil N.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1957; Bibliography: leaves 46-47.
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157770</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A case study of two autopilot design methodologies : linear quadratic and H-infinity for a tail controlled missile</title>
<link>https://hdl.handle.net/1721.1/157769</link>
<description>A case study of two autopilot design methodologies : linear quadratic and H-infinity for a tail controlled missile
Edeburn, Mark Anthony.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1993; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157769</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal pricing for peak loads and joint production : theory and applications to diverse conditions.</title>
<link>https://hdl.handle.net/1721.1/157768</link>
<description>Optimal pricing for peak loads and joint production : theory and applications to diverse conditions.
Chernick, Paul Lee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography: leaves 222-234.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157768</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of electromagnetic interference fringes for two-layer media.</title>
<link>https://hdl.handle.net/1721.1/157767</link>
<description>Evaluation of electromagnetic interference fringes for two-layer media.
Chew, Weng Cho.
Thesis: Elec. E., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157767</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toeplitz operators</title>
<link>https://hdl.handle.net/1721.1/157766</link>
<description>Toeplitz operators
Gencarelli, Frank Thomas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1977; Bibliography : leaf 45.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157766</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A scientific academy</title>
<link>https://hdl.handle.net/1721.1/157765</link>
<description>A scientific academy
Eaton, Charles S., 1838-1896.
Thesis: B.S., Massachusetts Institute of Technology, Department of Architecture, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157765</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wearable Gut and Brain Interfaces for Valence Detection and Modulation</title>
<link>https://hdl.handle.net/1721.1/157740</link>
<description>Wearable Gut and Brain Interfaces for Valence Detection and Modulation
Vujic, Angela
Emotion detection interfaces have shown promise in mediating our emotional health through improved diagnosis, self-tracking, social support systems, mindfulness, and biofeedback training; however, the most popular methods falter when distinguishing between positive and negative valence, or raise privacy or social issues. Brain and gut interfaces can serve as an alternative, but often require complex setups with many electrodes, large datasets, and significant training to achieve benchmark emotion detection performance. I present novel, wearable gut and brain interfaces for valence detection and modulation that can be made feasible with as few as two electrodes and minimal training and statistical analysis. I coin and define the area of gut-brain computer interfacing (GBCI), while further developing the field of affective brain-computer interfacing (aBCI). I take a novel approach by using the stomach signal and motivational-direction models as an alternative to traditional affective modalities and models. I present Joie, a joy-based electroencephalography (EEG) brain-computer interface (BCI); JoyNet, a neural network for joy detection with EEG; and KALM, an EEG, electrodermal activity (EDA), and respiration rate multimodal fusion model. I also present Serosa, a novel electrogastrography (EGG) GBCI that non-invasively records indices of gastric neurons that can be correlated with emotional states and provides a new affect detection modality. This thesis presents findings and innovations in research and application: first, offline affect detection models that contextualize neural signals with embodied modalities and evaluate how each signal influences affect detection performance. Second, novel real-time interfaces are implemented and evaluated with placebo-controlled laboratory studies. Third, I present a neuroethics discussion that uses socioecological models to anticipate harms, and I reflect on the works in this thesis.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157740</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Secure Computation in Decentralized Systems</title>
<link>https://hdl.handle.net/1721.1/157739</link>
<description>Secure Computation in Decentralized Systems
Zyskind, Guy
Decentralized systems like Bitcoin and Ethereum are real-world examples of secure distributed systems deployed at scale. Over the past decade, these systems and others have proven to provide a trust-minimized solution for computing. They ensure the correct execution of code (correctness), maintain the integrity of stored data, and remain consistently available (availability). Additionally, they allow any user to interact without the risk of censorship.&#13;
&#13;
However, while decentralized systems guarantee security properties like integrity, correctness, and availability, they do not provide privacy. In this regard, they are strictly worse than assuming full trust in a centralized server, since every node in the network sees all data. Furthermore, in many of these open systems (also known as 'permissionless' networks), there are no restrictions on who can operate a node. This means that decentralized systems, and public blockchains in particular, cannot operate on private data, greatly limiting the kinds of use-cases they can support.&#13;
&#13;
This dissertation explores solutions to mitigate the privacy concerns associated with modern decentralized systems, focusing particularly on blockchains. The research employs Secure Multiparty Computation (MPC) techniques to address these issues, demonstrating how MPC, which already shares a similar distributed trust threat model, can enhance privacy in decentralized systems. More specifically, this thesis focuses on the following key areas in decentralized systems:&#13;
&#13;
Access Control Mechanisms and Confidential Smart Contracts: The thesis begins by exploring access control mechanisms on blockchains, and from that builds up to the concept of confidential smart contracts -- arbitrary programs that execute both correctly and privately.&#13;
&#13;
Identity Management and Authentication: Building on access control and confidential smart contracts, we examine identity management and authentication within decentralized networks. We develop a highly efficient Threshold ECDSA protocol that runs in the server-aided MPC model.&#13;
&#13;
Perhaps more importantly, we revisit the server-aided MPC model itself, which sits somewhere between the dishonest-majority and honest-majority MPC paradigms, and show that a confidential smart contract is a real-world realization of the server in this model. We thus theorize that dishonest-majority MPC protocols in general can be practically improved under this model, and argue that because there is a real-world counterpart, the model is realistic.&#13;
&#13;
An Improved Distributed Point Function (DPF) and ORAM: A major theoretical contribution of this work is a novel three-party Distributed Point Function (DPF) construction. This leads to state-of-the-art Oblivious RAM (ORAM) and Distributed ORAM (DORAM) protocols, which are important building blocks in MPC.&#13;
&#13;
Privacy-Preserving Digital Currencies: Using this DPF construction, we revisit the problem of privacy-preserving digital currencies, proposing a solution in the account model. This approach challenges the current consensus that privacy in blockchains requires a UTXO model.&#13;
&#13;
Secure Inference with Private Retrieval: Lastly, the thesis explores how Large Language Models (LLMs) can perform secure inference while retrieving data from private, distributed databases. This method represents a step towards building secure decentralized AI systems that respect user privacy.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157739</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cyborg Psychology: The Art &amp; Science of Designing Human-AI Systems that Support Human Flourishing</title>
<link>https://hdl.handle.net/1721.1/157738</link>
<description>Cyborg Psychology: The Art &amp; Science of Designing Human-AI Systems that Support Human Flourishing
Pataranutaporn, Pat
As Artificial Intelligence (AI) becomes increasingly integrated into our daily lives, understanding the psychological implications of human-AI interaction is crucial for developing systems that truly support human capabilities. This dissertation introduces “Cyborg Psychology,” an interdisciplinary, human-centered approach to understanding how AI systems influence human psychological processes. Cyborg Psychology also emphasizes applying these insights to design and develop AI systems that support human flourishing. Cyborg Psychology recognizes the complex, non-linear interactions between humans and AI, acknowledging that both can influence and shape each other in dynamic and often unpredictable ways. Informed by human-computer interaction, psychology, and behavioral sciences, this dissertation focuses on understanding AI’s impact on crucial cognitive and behavioral processes, including motivation, critical thinking, self-reflection, confidence, beliefs, biases, and more. In addition, the work presents several AI systems that apply psychological insights to support human cognition and behavior. For example, the “Wearable Reasoner” seeks to enhance human rationality, “Personalized Virtual Characters” aims to support learning motivation, and “Future You” is designed to encourage long-term oriented thinking and behavior. Employing a diverse array of research methodologies, this work proposes a framework for investigating the implications of interaction design choices. The ultimate goal is to empower the development of AI systems that foster human flourishing by nurturing intellectual growth, cultivating motivation, stimulating critical thinking, and preserving individual autonomy in decision-making.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157738</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechatronic Design and Evaluation of a Two-Degree-of-Freedom Powered Ankle-Foot Prosthesis with Myoneural Interfacing Capabilities</title>
<link>https://hdl.handle.net/1721.1/157737</link>
<description>Mechatronic Design and Evaluation of a Two-Degree-of-Freedom Powered Ankle-Foot Prosthesis with Myoneural Interfacing Capabilities
Hsieh, Tsung-Han
Recent advancements in neural interfaces and sensing technologies have opened new possibilities for enhanced prosthesis control. The agonist-antagonist myoneural interface (AMI) connects residual muscle pairs to emulate natural dynamics, while electronic osseointegrated prostheses for the rehabilitation of amputees (eOPRA) allow direct measurement of neural signals through implants. Additionally, magnetomicrometry enables precise, real-time measurement of muscle length. These innovations motivate the development of more sophisticated prosthetic designs, including two degrees of freedom (2DoF) ankle systems. &#13;
&#13;
This Ph.D. thesis advanced bionic limb technology through three primary aims. First, a comprehensive characterization study of human-scale actuators was conducted, including brushless motors of different sizes. Using a custom-built dynamometer, the performance of these actuators was evaluated across their full operating range. Building upon this foundation, an innovative bionic ankle-foot prosthesis with enhanced capabilities was designed and fabricated. This advanced prosthetic system achieved biological fidelity in terms of range of motion, torque output, and angular velocity, thus enabling more natural and adaptable gait patterns. To validate the efficacy of the system, a subject with AMI constructs was fitted with the prosthesis and underwent a series of locomotion tasks, including level-ground ambulation and obstacle traversal. &#13;
&#13;
This work pushed the boundaries of bionic limb function and advanced the restoration of natural locomotion after lower limb amputation, providing valuable insights into the potential of combining advanced prosthetic design with neural interfacing techniques.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157737</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying the effects of sunlight on the fate of oil spilled at sea</title>
<link>https://hdl.handle.net/1721.1/157736</link>
<description>Quantifying the effects of sunlight on the fate of oil spilled at sea
Freeman, Danielle Haas
Oil spilled at sea is transformed by sunlight-driven photochemical reactions. The transformed oil has different properties and behavior in the environment compared to the fresh oil, resulting in different fates and effects. My work in this thesis was to put numbers on these changes, with the goal of better predicting where oil goes and how it behaves in diverse spill scenarios. First, I focused on how sunlight generates water-soluble compounds from oil, which can lead to the dissolution of oil-derived compounds in seawater (photo-dissolution; Chapter 2). To find out whether photo-dissolution could be an important fate process during an oil spill, I used a combination of experiments and photochemical rate modeling to calculate photo-dissolution rates for the 2010 Deepwater Horizon spill (DwH) in the Gulf of Mexico (GoM). I found that photo-dissolution likely converted ~8% of the floating surface oil to dissolved organic carbon during DwH, a fraction similar in magnitude to other well-recognized fate processes. Moving beyond DwH, I evaluated the sensitivity of oil photo-dissolution and photochemically-altered oil physical properties to temperature. I found that if a spill like DwH had occurred in 5°C water rather than the exceptionally warm 30°C water of the GoM, 7x less oil could have dissolved via photo-dissolution and the viscosity of the remaining insoluble oil could have been 16x higher, resulting in lower entrainment of oil into the water column as small droplets (Chapter 3). The net result is that more oil would stay at the sea surface in a cold-water spill. Finally, I determined photo-dissolution rates for diverse oil products beyond the light crude that spilled during DwH (Chapter 4). I found that oil photo-reactivity could be predicted from oil chemical composition. I also found that photo-dissolution likely affects oil mass balance in spills of light oils forming thin slicks but not in spills of light or heavy oils forming thick slicks. 
Overall, this work advances our understanding of how oil changes in the environment upon sunlight exposure. This information can be applied to better predict, evaluate, and mitigate the effects of oil spilled at sea on marine ecosystems, including humans.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157736</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tomorrow's Typography</title>
<link>https://hdl.handle.net/1721.1/157735</link>
<description>Tomorrow's Typography
van de Seyp, Vera
This thesis is an exploration of new tools for typography that investigates how emerging (AI) technologies can contribute to the type design practice in a meaningful way. I created computational design experiments focusing on three areas: (A) design automation, (B) interfacing, and (C) creative exploration. A lot of care has been put into understanding the current scene through expert interviews, workshops, talks, and surveys. With pose estimation, generative visual AI, and large language models that operate on text, I explore whether typographic shapes can be created and manipulated with different modes of expression, in a playful, intuitive, and collaborative way.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157735</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancements in Management Science: Applications to Online Retail, Healthcare, and Non-Profit Fundraising</title>
<link>https://hdl.handle.net/1721.1/157734</link>
<description>Advancements in Management Science: Applications to Online Retail, Healthcare, and Non-Profit Fundraising
Zhai, Chen Wen (Sabrina)
Management science is an evolving field that requires novel models and algorithms, combining methods from statistics, optimization, and machine learning. This thesis presents advancements in management science across three domains: revenue management, healthcare, and non-profit funding platforms. The chapters in this thesis develop rigorous algorithms and techniques which are relevant in practice, and present data-driven insights into each of the application areas. &#13;
&#13;
Chapter 2 studies a personalized dynamic pricing problem commonly faced by online retailers. Customers arrive sequentially to the selling platform, and for each arrival the seller must make an immediate pricing decision for that customer. The seller aims to learn the demand as a function of price and customer covariates through price experimentation, while simultaneously earning as much total revenue as possible. Previous work on this topic has adopted a classical online learning setup, where the retailer begins the selling horizon with no information about the problem and gains all knowledge about the demand function from the online selling phase. However, this assumption is often not true in practice. Many retailers already possess some information about their product's demand from market research or previous sales data, and not utilizing this information is clearly suboptimal. The chapter develops a novel framework that allows the seller to incorporate historical data on pricing decisions and realized demand, and moreover enables one to study the effect that certain characteristics of this historical dataset have on online selling performance. Using this framework, a dynamic pricing algorithm is proposed which effectively uses both historical and real-time data, and achieves provably optimal performance. Furthermore, a new distance measure is developed to quantify how close the historical pricing decisions are to being optimal. Using this distance measure, the chapter shows a surprising inverse relationship between this measure and the achievable online performance. &#13;
&#13;
Chapter 3 focuses on applying causal inference techniques to study the treatment efficacy of different antibiotics on patients with urinary tract infection. Up to 50% of women will experience a urinary tract infection (UTI) in their lifetime, making it the third most common indication for antibiotic treatment in the United States. Though national treatment guidelines encourage using one of three antibiotics as the first-line treatment, other second-line and alternative antibiotics are still commonly prescribed in practice. Studies on the efficacy of first-line versus second-line and alternative antibiotics for UTI are limited and dated. The chapter presents a retrospective cohort study using the claims database from Independence Blue Cross to determine the relative efficacy and adverse event rates between different categories of antibiotics. By combining causal inference techniques with automated feature extraction using the Observational Medical Outcomes Partnership (OMOP) common data model, evidence is found which supports the use of guideline-recommended first-line treatments for uncomplicated UTI. Specifically, the rate of treatment efficacy is higher for first-line antibiotics relative to alternatives. Surprisingly, the analysis also finds evidence which supports increased efficacy of first line agents relative to second-line antibiotics, which are of broader spectrum, albeit the effect difference is smaller compared to the comparison between first-line antibiotics and alternatives. This large-scale cohort study which includes a comprehensive collection of covariates provides much-needed evidence to support the continued recommendation of first-line drugs for the treatment of UTI. The chapter also suggests the feasibility for performing complex causal inference analyses using automated feature engineering packages for OMOP-formatted datasets.&#13;
&#13;
Chapter 4 studies an online matching problem where sequentially arriving donors must be matched to projects needing funding on peer-to-peer philanthropic crowdfunding platforms such as DonorsChoose.org. Empirical studies have shown that (i) donors have heterogeneous preferences over the projects, and (ii) many return to make more than one donation. Facing such donors, the platform’s aim is to match each donor to one of their preferred projects so as to maximize the total donation without over-funding any projects and without knowing the arrival pattern. Previous work in the literature has not studied the effect of returning donors on algorithm performance. The chapter shows an upper bound on the best achievable worst-case performance of any online algorithm which reveals the relationship between donor return rate and algorithm performance. Furthermore, numerical analysis shows that a simple known algorithm achieves a performance that improves with the number of returning donors without differentiating between the original and return donors. The algorithm is intuitive and straightforward to implement, and the results shed light on the practical value that returning traffic can bring for fundraising platforms.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157734</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Single Bio-molecule Detector Based on CMOS&#13;
Nanofluidic Platform</title>
<link>https://hdl.handle.net/1721.1/157733</link>
<description>Towards a Single Bio-molecule Detector Based on CMOS&#13;
Nanofluidic Platform
Zikrallah, Ahmed S.
Cytokine secretion is a core component of the function of many cell therapy products: It affects the tissue repair capacity of induced Pluripotent Stem Cells (iPSCs) and Mesenchymal Stem Cells (MSCs) and the tumorigenicity of Chimeric Antigen Receptor (CAR) T-cell therapies. Ideally, we would be able to continuously monitor the secretome of these cell therapies as they are transformed and expanded in manufacturing. However, state-of-the-art techniques for monitoring typically low concentrations of cytokines require either Mass Spectroscopy (MS) or immunoassays like Enzyme-linked Immunosorbent Assay (ELISA). We propose the use of CMOS technology to build a proteomic platform with single-biomolecule resolution. A prototype chip has been designed and fabricated using a standard foundry process, incorporating a new implementation of a Solid State Nanopore (SSN) of size 55nm×162nm×100nm (w×l×h) with nanofluidic access channels that bridge the buffer solution between the assay space in the packaging structure – a polycarbonate/polydimethylsiloxane (PDMS) package – and the nanopore on the chip. A silicon Single Photon Avalanche Detector (SPAD) was also implemented and placed near the nanochannels to utilize fluorescence labeling imaging techniques. In addition, a read-out amplifier that achieves a midband gain of 36.2 dB at a 3 dB bandwidth of 0.1-3.6 MHz is also implemented on the same silicon die, paving the way to superior performance compared to the ionic current read-out systems used earlier for electrical biomolecule detection, thanks to the low parasitics that result from integration. The aforementioned modalities, integrated on a single chip, open the space for the use of CMOS platforms in the electrical and optical interrogation of biomolecules, opening a new horizon for near real-time biomarker assays. 
The following thesis builds on earlier work that was performed in [1][2] with the objective of expanding on different techniques to interface and characterize the performance of these modalities, especially after post-processing the chips with the aid of tools at MIT.nano. The thesis explores the further deployment of integrated SPAD in a Fluorescence Lifetime Imaging (FLIM) system to image fluorescence-labeled molecules, showcasing the capabilities of the CMOS nanofluidic platform to detect biomarkers such as cytokines.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157733</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laboratory studies of atmospheric photochemistry in indoor and outdoor environments</title>
<link>https://hdl.handle.net/1721.1/157732</link>
<description>Laboratory studies of atmospheric photochemistry in indoor and outdoor environments
Goss, Matthew B.
Secondary organic aerosol (SOA), fine particulate matter formed through indirect photochemical reactions, influences the climate and contributes to air pollution harmful to human health. While these two effects act at different scales, they are governed by similar chemical processes. This work investigates the atmospheric photochemistry of indoor and outdoor environments, giving particular attention to the reactions that lead to SOA formation, notably those involving oxidant and peroxy radical (RO2) chemistry.&#13;
First, this thesis examines the oxidation of dimethyl sulfide (DMS), which represents a large natural source of sulfur to the atmosphere and affects the climate. Using varied chemical conditions across numerous environmental chamber experiments, we characterize aerosol formation from the oxidation of DMS, as well as two related compounds, dimethyl sulfoxide and dimethyl disulfide. We also measure key rate constants crucial to understanding the formation and fate of hydroperoxymethyl thioformate, an important recently-discovered DMS product.&#13;
Second, this work investigates the indoor air quality implications of 222 nm germicidal ultraviolet lamps (GUV222). While these lamps are effective at reducing the spread of airborne pathogens, they lead to the formation of ozone (O3), a harmful air pollutant. Through environmental chamber experiments, we quantify the GUV222-driven production of O3, OH, oxidized products, and SOA, and further demonstrate that GUV222 causes new particle formation. Based on these results, we recommend that GUV222 lights be operated at their lowest effective level.&#13;
Finally, we pivot to examine assumptions embedded within the relationship between chamber experiments and SOA parameterizations in global chemical transport models. We represent historical laboratory experiments in a box model, enabling explicit estimates of the unmeasured RO2 and oxidant chemistry that influences aerosol formation. This work shows that reaction conditions are dynamic, changing within and between experiments, and demonstrates that RO2 isomerization is implicitly built into SOA parameterizations, even without its explicit representation.&#13;
Overall, this thesis connects multiple areas of indoor and outdoor atmospheric photochemistry, improving our understanding of marine organosulfur chemistry, the impacts of GUV222 lamps, and the relationship between laboratory chamber measurements and the modeling of aerosol on a global scale.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157732</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating cofactor transfer for a B₁₂-dependent enzyme</title>
<link>https://hdl.handle.net/1721.1/157731</link>
<description>Investigating cofactor transfer for a B₁₂-dependent enzyme
Duong, Alexander T.
The metallocofactors utilized by enzymes can range in complexity from single metal ions to organometallic cofactors well over 1000 Da. These cofactors enable metalloenzymes to accomplish a diverse set of unique and challenging chemistry that is critical to core life functions. One of these metallocofactors, adenosylcobalamin (AdoCbl), has only one cognate enzyme in humans: methylmalonyl-CoA mutase (MCM), which is involved in the catabolism of several amino acids, cholesterol, and odd-chain fatty acids. MCM relies on two other proteins, a G-protein metallochaperone called methylmalonic aciduria type A protein (MMAA) and a protein called adenosyltransferase (ATR), to load and off-load cofactor. Mutations or deletions of the gene for MCM, or of any of the genes corresponding to accessory proteins, which interfere with cofactor delivery and removal, can lead to a potentially lethal inborn error in metabolism. If the cofactor becomes damaged in the active site of MCM, ATR unloads the cofactor, repairs it, and reloads the regenerated AdoCbl onto the mutase. A molecular understanding of this process has been challenging to obtain due to the difficulty of structurally characterizing a three-protein MCM-MMAA-ATR complex that is transient in nature. An orthologous protein from C. metallidurans in which the G-protein metallochaperone is naturally fused to its target mutase isobutyryl-CoA mutase (IcmF) provides an alternative two-protein IcmF-ATR system for structural and biochemical characterization. Recent work has shown that the IcmF system utilizes a mechanism of active site opening similar to that of non-fused systems like the human one. However, the mechanisms by which ATR recognizes the presence of damaged cofactor and then removes it remain unclear. In this thesis, we discuss the development of an assay based on UV-Vis spectroscopy to monitor cofactor transfer between IcmF and ATR. 
We also discuss efforts to substitute histidine residues in IcmF suspected of serving as intermediate binding sites during cofactor transfer, with the goal of using the developed assay as a means of observing potential changes in transfer efficiency by perturbing these histidine residues. This work seeks to improve our understanding of AdoCbl-dependent enzyme maturation, and inform our ability to harness their unique reactivity.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157731</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Purification Strategies and Monomer Platforms for Ruthenium-Initiated Living Ring-Opening Metathesis Polymerization</title>
<link>https://hdl.handle.net/1721.1/157730</link>
<description>New Purification Strategies and Monomer Platforms for Ruthenium-Initiated Living Ring-Opening Metathesis Polymerization
Kilgallon, Landon J.
Chapter 1: Introduction to Covalent Capture Purification and Ring-Opening Metathesis Polymerization
In chemical spaces where difficult purifications are commonplace, innovative purification methodologies have been developed to circumvent the limitations associated with classical physicochemical property-driven purifications (chromatography, crystallization, distillation, etc.). Covalent capture purification—a type of catch-and-release purification—purifies molecules by selectively capturing them (via a covalent bond) onto a solid support, washing away impurities, and cleaving the product from the support for recovery. In the first half of this chapter, we review literature examples where covalent capture has been implemented for the purification of chemically synthesized molecules, including synthetic peptides, oligonucleotides, oligosaccharides, and small molecules. Ruthenium-initiated ring-opening metathesis polymerization (ROMP) remains an extraordinary tool for polymer synthesis due to its functional group tolerance, the ready availability of monomers and initiators, and the overall ease with which well-defined polymers can be rapidly synthesized. However, complete removal of ruthenium residues from the product is a difficult task that is compounded by the lack of understanding of initiator decomposition in ROMP. The existing methods for purification of ROMP polymers, which are typically solubility-based, are reviewed. The promise of covalent capture purification—a reactivity-based purification method—for living ROMP is discussed.
Chapter 2: Covalent Capture Purification for Living Ring-Opening Metathesis Polymerization
Covalent capture purification, a type of catch-and-release purification, facilitates complex molecule purification by partitioning reaction mixtures based on chemical reactivity rather than physicochemical properties. While this purification methodology has proven highly valuable for the purification of synthetic peptides, oligonucleotides, and oligosaccharides, it has not yet been implemented for the purification of synthetic polymers. Ruthenium-initiated living ROMP remains an extraordinary tool for polymer synthesis, but removal of trace ruthenium from the polymeric product remains a difficult task due to the wide scope of polymer compositions, the lack of a complete understanding of initiator decomposition, and the unknown identities of trace ruthenium products generated during ROMP. In this work, we translate covalent capture purification to living ROMP for the first time, and demonstrate its use as a general purification method for ROMP polymers. The optimized covalent capture system was used to purify a variety of linear polynorbornenes (up to ~7 kDa) in yields ≥49% and high purities (≥99.6% ruthenium removed).
Chapter 3: Tricyclononenes and Tricyclononadienes as Efficient Monomers for ROMP: Understanding Structure–Propagation Rate Relationships and Enabling Facile Post-Polymerization Modification
Tricyclononenes (TCN) and tricyclononadienes (TCND) represent under-explored classes of monomers for ROMP that have the potential to both advance fundamental knowledge (structure–polymerization kinetics relationships) and serve as practical tools for the polymer chemist (post-polymerization functionalization). In this work, a library of TCN and TCND imides, monoesters, and diesters, along with their exo-norbornene counterparts, was synthesized to compare their behavior in ruthenium-initiated ROMP. To understand the relationship between monomer structure and ROMP propagation rate, density functional theory methods were used to calculate a variety of electronic and steric parameters for the monomers. While electronic parameters (e.g., HOMO energy levels) correlated positively with the measured kp values, steric parameters generally gave improved correlations, which indicates that monomer size and shape are better predictors for kp than electronic parameters for this data set. Furthermore, the TCND diester—which contains an electron-deficient cyclobutene that is resistant to ROMP—and its polymer p(TCND) are shown to be highly reactive toward base-catalyzed conjugate addition with thiols, providing a protecting/activating-group-free strategy for post-polymerization modification.
Chapter 4: Safe and Scalable Syntheses of N,N-Dimethyltrifluoromethanesulfonamide (DMTMSA) and Other Trifluoromethanesulfonamide Solvents for High Energy Density Battery Applications
A simple, scalable synthetic methodology for the synthesis of N,N-dimethyltrifluoromethanesulfonamide (DMTMSA) and other trifluoromethanesulfonamide solvents is described. No specialized glassware is required, water is the solvent, and an ice bath is used for cooling. Up to 155 g of DMTMSA is synthesized in a single batch in 92% yield. The optimized process is highly mass efficient (PMI = 9.1), and excess dimethylamine may be recovered (93% recovery, 51% decrease in waste) and recycled via a simple short-path distillation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157730</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of quorum-sensing circuits for metabolic flux control in Escherichia coli</title>
<link>https://hdl.handle.net/1721.1/157729</link>
<description>Development of quorum-sensing circuits for metabolic flux control in Escherichia coli
Dinh, Christina V.
Metabolic engineering seeks to reprogram microbial cells to efficiently produce value-added chemicals. Traditionally, this is achieved by overexpressing the production pathway and/or knocking out competing endogenous pathways. However, limitations in some pathways are more effectively addressed through dynamic metabolic flux control to favor different objectives over the course of the fermentation. This thesis aims to develop autonomous and pathway-independent regulation tools that can be applied to controlling metabolic fluxes in these contexts to improve production. To this end, quorum-sensing (QS)-based circuits were constructed, characterized, and applied to regulating metabolic fluxes in a cell-density-dependent manner. The first tool is a bifunctional QS circuit in which each control module regulates transcription under circuits derived from different QS systems. Characterization showed that the switching dynamics of both circuits can be tuned by varying the expression level of circuit components. To address major limitations in the naringenin and salicylic acid pathways, one module was used to delay transcription of key heterologous genes to overcome enzyme inhibition and growth burden while the second module controlled expression of CRISPRi components to silence competing endogenous pathways. Application of these regulation schemes resulted in significant production improvements in both pathways. Especially when aiming to dynamically down-regulate enzyme activity, post-translational control can offer faster response dynamics. To develop a post-translational control tool, expression of a protease linker was regulated under a QS circuit, resulting in selective degradation of tagged enzymes. This circuit was applied to regulating phosphofructokinase (Pfk) levels with the ultimate goal of dynamic composition control in co-culture fermentations. 
Application of this control circuit in a naringenin-producing co-culture system resulted in improved composition profiles, which benefited production. Finally, a second post-translational control system that co-localizes proteins in response to cell-density changes was constructed and characterized. Such a system can be applied to actuate changes in reaction rates with minimal dependence on host cell machinery. Overall, this work developed QS-based circuits and showed they can be powerful tools for addressing key limitations in microbial syntheses.
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157729</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Listening by Synthesizing</title>
<link>https://hdl.handle.net/1721.1/157728</link>
<description>Listening by Synthesizing
Cherep, Manuel
Generative audio models offer a scalable solution for producing a rich variety of sounds. This can be useful for practical tasks, like sound design in music, film, and other media. However, these models overwhelmingly rely on deep neural networks, and their massive complexity hinders our ability to fully leverage them in many scenarios, as they are not easily controllable or interpretable. In this thesis, I propose an alternate approach that relies on a virtual modular synthesizer: a computational model with modules for controlling, generating, and processing sound that connect together to produce diverse sounds. This approach has the advantage of using only a small number of physically motivated parameters, each of which is intuitively controllable and causally interpretable in terms of its influence on the output sound. This design takes inspiration from devices long used in sound design and combines it with state-of-the-art machine learning techniques. In this thesis, I present three projects that use this formulation. The first is SynthAX, an accelerated virtual modular synthesizer that implements the core computational elements in an accelerated framework. The second, CTAG, combines the synthesizer with an audio-language model into a novel method for text-to-audio synthesis via parameter inference. This method produces more abstract sketch-like sounds that are distinctive, perceived as artistic, and yet similarly identifiable to recent neural audio synthesis models. The third is audio doppelgängers, sounds generated by randomly perturbing the parameters of the synthesizer to create positive pairs for contrastive learning, encompassing more of the variety found in real-world recordings, with controlled variations in timbre, pitch, and temporal envelopes. This method offers an efficient alternative to collecting real-world data, producing robust audio representations that compete with real data on established audio classification benchmarks.
This thesis contributes tools for understandably generating rich and diverse sounds, using them and their parameters for sound design and understanding at scale.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157728</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Piezoelectric single crystal based one-dimensional phased array for breast tissue imaging</title>
<link>https://hdl.handle.net/1721.1/157727</link>
<description>Piezoelectric single crystal based one-dimensional phased array for breast tissue imaging
Du, Wenya
Ultrasound is widely used in clinical practice because it is safe, non-invasive, non-ionizing, low-cost, and provides real-time imaging, monitoring, and therapy. However, conventional ultrasound probes are rigid, require applied pressure, and are operator-dependent. Replacing rigid transducers with conformable ultrasound transducer arrays can allow image acquisition on curved body parts, improve image quality, and enable functions such as long-term monitoring. In this thesis, I propose a conformable ultrasound breast patch (cUSBr-Patch) consisting of a one-dimensional (1D) phased array and a nature-inspired patch design, which offers large-area, deep tissue scanning and multi-angle, repeatable breast imaging while avoiding the drawbacks of conventional ultrasound imaging technologies. I used a Yb/Bi-doped PIN-PMN-PT single crystal as the active element due to its superior piezoelectric properties (d33 = 2,800 pC/N, εr = 7,000, k33 = 0.93). I then fabricated a 1D phased array transducer consisting of 64 elements with an operational frequency of 7.0 MHz. The 1D array exhibits promising acoustic performance with i) a maximum imaging depth of 80 mm, ii) contrast sensitivity of 3 dB, iii) axial/lateral resolutions of 0.25/1.0 mm at 30 mm depth, and iv) a larger field of view than the commercial handheld linear probe at depths of approximately 30 mm or deeper, indicating a potentially reliable capability to detect early-stage breast tumors. Beyond this, comprehensive in vitro experimental studies establish that the cUSBr-Patch can provide accurate and reproducible imaging of different phantoms. The clinical trials reveal that the patch exhibits a sufficient contrast resolution (~3 dB) and axial/lateral resolutions of 0.25/1.0 mm at 30 mm depth, allowing the observation of small cysts (~0.3 cm) in the breast. 
This research develops a first-of-its-kind ultrasound technology for breast tissue scanning and imaging which offers a non-invasive method for tracking real-time dynamic changes of soft tissue.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157727</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrophilic C(sp²)–H Cyanation with Inorganic Cyanate (OCN⁻) by Pᴵᴵᴵ/Pⱽ=O-Catalyzed Phase Transfer Activation</title>
<link>https://hdl.handle.net/1721.1/157726</link>
<description>Electrophilic C(sp²)–H Cyanation with Inorganic Cyanate (OCN⁻) by Pᴵᴵᴵ/Pⱽ=O-Catalyzed Phase Transfer Activation
Hu, Shicheng
A catalytic method for the direct electrophilic cyanation of C(sp²)–H nucleophiles with sodium cyanate (NaOCN) is reported. Mechanistic experiments show that under solid-liquid phase transfer, an inorganic cyanate is activated by halide displacement on a halophosphonium. Redox catalysis is enabled by the use of a strained phosphine (phosphetane), such that catalyst turnover from phosphine oxide to phosphine is readily achieved with a terminal hydrosilane reductant. These results demonstrate the feasibility of deoxyfunctionalization of insoluble inorganic salts by Pᴵᴵᴵ/Pⱽ=O-catalyzed phase transfer activation, as exemplified by C(sp²)–H cyanation with NaOCN as the “CN⁺” source.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157726</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multifidelity Methods for Design of Transition Metal Complexes</title>
<link>https://hdl.handle.net/1721.1/157725</link>
<description>Multifidelity Methods for Design of Transition Metal Complexes
Janet, Jon Paul
The rational design of materials with tightly controlled properties is crucial to addressing future challenges in energy, electronics and catalysis. While improvements in computing power have made simulation with density functional theory (DFT) an essential tool in screening new materials, it remains too costly to address truly high-dimensional design spaces. This problem is especially acute for open-shell transition metal (TM) complexes, which are of central importance in homogeneous catalysis and have applications in solar energy and electronics. The space of TM complexes is enormous and poorly characterized, while DFT calculations for these systems are expensive and sensitive to method choice, making it impractical to simulate large numbers of candidates indiscriminately. This makes the search for TM complexes with desired properties a formidable challenge. This thesis addresses this challenge by formulating strategies for materials design that exploit insights from data-driven surrogate models together with first-principles simulations. A framework for data-driven inference of the quantum properties of TM complexes is developed, using artificial neural networks (ANNs) and graph-based molecular representations that facilitate rapid screening while retaining physical meaning such that chemical insights can be extracted. Multiple sources of uncertainty that would limit the application of these methods to TM complexes are addressed. Surrogate models are trained to estimate system-specific DFT uncertainty by including data from DFT calculations with different fractions of exact exchange, and a novel uncertainty metric for data-driven discovery is proposed that quantifies the ability of ANNs to generalize to unseen data based on similarity in the learned latent space. This metric is shown to offer superior performance over existing methods. 
The application of these methods to virtual design problems is demonstrated with two case studies: 1) identifying spin crossover complexes from a design space of thousands using an evolutionary strategy and 2) probabilistic, multiobjective optimization of redox couples over a 3 million-complex space. The utility of this surrogate-assisted approach is evident and orders-of-magnitude accelerations are obtained over screening purely with DFT. Such strategies open the door for in silico design of some of the most challenging molecular systems at a far greater scale than ever before.
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157725</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interoceptive Interventions: Interfacing with Inner States</title>
<link>https://hdl.handle.net/1721.1/157724</link>
<description>Interoceptive Interventions: Interfacing with Inner States
Jain, Abhinandan
This thesis explores the emerging frontier of Human-Computer Interaction (HCI) that moves beyond traditional interfaces to directly modulate internal bodily processes, emotions, and cognitive states. As HCI progresses towards further integration between human and machine, this thesis investigates novel technologies that interface with interoceptive systems to influence subjective experiences and mental states. In this thesis, I introduce "Interoceptive Interventions" — tools designed to modulate physiological states. These tools interface with and alter internal physiological conditions, thereby influencing emotional and behavioral states.&#13;
&#13;
I present three individual proof-of-concept wearable prototypes, “Frisson”, “ReCode”, and “Somnia”, grounded in neuroscience theories and evidence from embodied cognition. Frisson is a system designed to elicit aesthetic chills and their downstream cognitive effects. I showcase experimental evidence of chills’ impact on modulating emotional state and negative beliefs and on ameliorating anhedonia in depression. Next, I present ReCode, a system which modulates baroreceptor activity, causally influences sympathetic activity (the fight-or-flight response), and has a consequential effect on perceived emotion and anxiety ratings. Finally, I present Somnia, a system which stimulates the vestibular system to influence sleep onset. These prototypes target specific pathways to enable on-demand emotion elicitation, emotion regulation, and sleep regulation for users, while also providing potential non-pharmacological interventions for conditions like insomnia, depression, and anxiety.&#13;
&#13;
This thesis aims to make a twofold contribution: First, it introduces a conceptual framework that highlights how interfacing with unconscious bodily processes opens up new possibilities for human-computer interface design. Specifically, by gently actuating core physiological dynamics linked to consciousness and psychology, there is potential for such tools to deliver a promising new paradigm for digital wellness interventions. Second, the interoceptive modulation tools developed in this work provide a platform for researchers to experimentally engineer physiological processes underlying emotions and sleep. This could allow examining causative pathways between physiology and psychology beyond correlational observations and developing interventions for affective/sleep disorders. Researchers and designers can build on this to advance a generation of augmented technologies that empower users to self-regulate the body and the mind.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157724</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Last-Meter Delivery: Solving the Unattended Delivery Challenge from Streets to Doorsteps</title>
<link>https://hdl.handle.net/1721.1/157723</link>
<description>Last-Meter Delivery: Solving the Unattended Delivery Challenge from Streets to Doorsteps
Xiao, Wen-Xin
The rise of e-commerce has led to a surge in package deliveries, resulting in the proliferation of unattended delivery methods to address the "last-meter" problem – the challenge of delivering packages from the roadside or sidewalk to the customer's front door. This thesis proposes a methodology for implementing Large Language Models (LLMs) and Vision Language Models (VLMs) to enable delivery robots to identify the final delivery target and navigate the complex terrain from the curb to the front door. The proposed solution aims to enhance the autonomy and safety of last-mile delivery systems, addressing the "last-meter" challenge and improving the customer experience.&#13;
&#13;
This thesis presents a comprehensive overview of the last-meter delivery concept, aiming to bridge the gap between the roadside/sidewalk and the customer's front door. It begins by introducing the significance of last-meter delivery in the growing e-commerce industry and the challenges posed by unattended deliveries. The thesis then reviews the existing literature on autonomous and unmanned delivery systems, multimodal delivery approaches, and the application of large language models and vision language models in robotics. This research identifies the advancements and gaps in the field that the proposed methodology aims to address.&#13;
&#13;
The thesis primarily focuses on leveraging Large Language Models, the Segment Anything Model, and the open-source Florence-2 vision foundation model to enable the transmission of customers' delivery instructions to the final delivery target in the context of last-meter delivery. It outlines the methodology for data preparation, object detection and labeling, as well as the integration of Large Language Models to handle customer instructions and determine delivery target coordinates. It also describes the experimental design and methodologies employed to validate the effectiveness of the proposed system. This includes the use of a last-meter dataset and the evaluation of last-meter scene and target coordinate identification.&#13;
&#13;
The thesis concludes by summarizing the key findings and contributions, discussing the broader implications of the proposed methodology, and suggesting directions for future work, such as enhancing system robustness and scalability.&#13;
&#13;
KEYWORDS: Last-Mile Delivery, last-meter Delivery, Large Language Models (LLM), Vision Language Models (VLM), Robotics, Segment Anything Model (SAM), Open-Vocabulary&#13;
Object Detection (OVD).
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157723</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Imagine Yourself: Explorations in Fostering Personal Expression with Generative AI</title>
<link>https://hdl.handle.net/1721.1/157722</link>
<description>Imagine Yourself: Explorations in Fostering Personal Expression with Generative AI
Chadha, Karishma
Generative Artificial Intelligence (AI) technology has been promoted with many exciting promises to enhance human creativity. However, it has also been shown to amplify human bias and perpetuate harmful stereotypes. In the new age being ushered in by this technology, this thesis explores how educators and designers can use this technology to support young people in exploring and expressing aspects of their unique identities. In particular, I use a design based research methodology to iteratively create Imagine Yourself, a new digital experience adapting off-the-shelf text-to-image generation technology to support young people creating personal representations and stories.&#13;
Imagine Yourself combines OpenAI’s Dall-E 3 image generation technology with Scratch, a rich environment for young people to imagine and create interactive multimedia stories, animations, and more. Guided by a core value of designing for belonging, this project explores how experiences with generative AI can be designed to foster young people’s creative process in creating personally meaningful stories reflecting their own unique identities, experiences, and cultures. I discuss the iterative design process of creating Imagine Yourself in tandem with creative workshops, aiming to support more diverse representation within the image generation output and invite a tinkerable and iterative process of creating. I discuss observations and feedback from creative workshops with young people and adults creating with Imagine Yourself. Finally, I conclude with reflections on the design process as well as a discussion of challenges, limitations, opportunities, and open questions for future work incorporating generative AI into young people’s creative learning experiences.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157722</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Public Interest Computing: a Pluralistic Design Language Foundation for Societal-Machine Alignment</title>
<link>https://hdl.handle.net/1721.1/157721</link>
<description>Public Interest Computing: a Pluralistic Design Language Foundation for Societal-Machine Alignment
Gaikwad, Snehalkumar 'Neil' S.
The proliferation of algorithmic systems, including artificial intelligence (AI), in decision-making contexts necessitates a critical examination of their alignment with societal and environmental values. The reciprocal relationship between these norms and emerging AI technologies calls for a structural conceptualization of algorithmic systems that extends the scale of human-centered considerations. This dissertation introduces “Public Interest Computing, a Pluralistic Design Language,” which enables a novel design space for value-sensitive algorithmic ecosystems, fostering what we term “Societal-Machine Alignment.” The research is structured in three interconnected parts. First, we establish a comprehensive theory of Public Interest Computing, grounded in the planning and capability approach to human development. Second, we present a series of Public Interest Computing systems that instantiate and refine the proposed theoretical framework. These systems, co-designed with communities, demonstrate societal-machine alignment through five key design dimensions. Farm Pulse System exemplifies substantive fairness for at-risk farmers by enabling restorative justice through recourse in climate change adaptation decisions. Boomerang exhibits incentive alignment, promoting equitable designs of reputation systems in AI data markets. The Prototype Tasks System illustrates computationally mediated cognitive alignment, creating a level playing field for workers. The Beyond Boundaries framework enables environmental alignment, providing a platform for public discourse on climate change. Our analysis using Gobo focuses on value alignment, investigating ways to increase human agency in interactions with invisible algorithms on online platforms. Each system serves as an empirical testbed, providing critical design insights that shaped the theory and engineering of Public Interest Computing.&#13;
&#13;
The third part demonstrates the interplay between the developed Public Interest Computing systems and policy by applying the Pluralistic Design Language to real-world scenarios. We illustrate the bidirectional relationship between technology and policy, showing how Public Interest Computing informs policy decisions (“AI for Policy”) and, conversely, how policy shapes the responsible development of AI systems (“Policy for AI”). This symbiotic relationship opens new avenues for evidence-based policymaking, with Public Interest Computing serving as a foundation. By synthesizing the insights gained from this demonstration, we offer a principled approach for future research and practice, paving the way for a more informed and responsible design of algorithmic systems that aligns with societal values and priorities.&#13;
&#13;
Public Interest Computing and its Pluralistic Design Language serve as a guiding lens, leading us towards a future where societal values and algorithmic ecosystems are inherently aligned. Public Interest Computing is not an end in itself but a means for understanding, reflection, and adaptation, ensuring that as technology advances, so does our commitment to aligning it with the greater good.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157721</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of Finite Element Methods and Satellite InSAR for Monitoring Deformations of a Large Tailings Dam</title>
<link>https://hdl.handle.net/1721.1/157720</link>
<description>Comparison of Finite Element Methods and Satellite InSAR for Monitoring Deformations of a Large Tailings Dam
Fetell, Robert Henry
Following the recent catastrophic failure of several mine tailings dams, there has been much interest in the use of numerical modeling and remote sensing for monitoring the safety and stability of these structures. This thesis presents a case study that investigates the accuracy of InSAR measurements and the predictive capabilities of finite element models using ground truth surface and sub-surface monitoring data applied to the Zelazny Most (SW Poland) copper tailings storage facility. This site has a well-documented history of lateral deformations in a critical section (XVIE) of the East dam that have been attributed to a deep-seated translation mechanism of shearing through the underlying Pliocene glacial clays. Since 2014, operators of the facility have constructed a series of stabilizing berms at this critical section. We investigated the accuracy of InSAR over this period, ending in 2019, by analyzing 186 ascending Sentinel-1 C-band images and 219 descending images using Persistent Scatterer Interferometry and SARProz™ software, comparing results with two surface geodetic benchmarks. Finite element analyses of the structure required a 2D model of section XVIE. We developed and integrated a stratigraphic model for the foundation soils, the complete construction history of the dam (since 1975), and selected input parameters for constitutive models to represent the soil behavior (foundation soils, tailings, dyke and berm materials) using Plaxis™ software. Our results show that InSAR achieves very consistent agreement with geodetic measurements for vertical (Up-Down) and lateral (E-W) surface deformations, over a time period where construction was limited to raising of the dyke near the crest of the dam and berm construction at the toe. The InSAR data are also insightful in showing relatively uniform lateral deformations occurring over the face of the dam, consistent with the interpreted translational failure mechanism. 
In contrast, it has proved much more challenging to predict subsurface deformations by FE analyses. The computed movements reflect accumulation of deformations over multiple stages of construction and involve shearing through the complex foundation stratigraphy.  We were able to achieve credible estimates of lateral deformations within the range of laboratory shear strength properties published in the literature and using the Hardening Soil (HS) model for non-linear shear stress-strain properties. However, the predictions of surface settlements and lateral deformation are much less reliable and depend on undocumented properties of the tailings, phreatic conditions in the tailings and details of the construction history.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157720</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing for Connection with Inner Processes</title>
<link>https://hdl.handle.net/1721.1/157719</link>
<description>Designing for Connection with Inner Processes
Mindel, Jessica Rachel
At a time of division, it is more important than ever that we help each other feel truly understood. Today's online ecosystems offer us many new ways to communicate personal stories, often through fast-paced, reactive channels, but few if any technologies enable us to share what I posit to be a crucial component of how we implicitly understand each other: our inner processes, e.g., how we form our values and identities, navigate unspoken tensions in a community, or feel that something resonates with us.&#13;
&#13;
This thesis explores inner processes as a resource for the design of systems that support human connection, interpersonal understanding, and reflection. Through a series of design iterations, I weigh approaches to eliciting inner processes, choosing media to externally, evocatively represent them, and encouraging perspective-taking behavior by guiding users through each other's inner processes. I approach this topic through three streams of projects, grounded in literatures that outline guidelines for successful perspective-taking and the development of interpersonal closeness, and that assert the value of creative play in surfacing and communicating inner processes, supporting perspective-taking, making room for new social norms, and enabling reframing.&#13;
&#13;
First, I present our collaborative work on Closer Worlds, a two-player, AI-assisted game in which players generate a world they might both want to live in in order to scaffold an emotionally intimate conversation about their memories and shared values. Next, to better understand inner processes entangled with creative practice, I conduct interviews with creative practitioners about the relationships they build through their practice, and design and develop prototypes for implicitly retracing inferred versions of one's own or another person's creative process, capitalizing on room for interpretation. Prototypes include Sjuzet, a compass that anchors the latent space of a user's creative writing to a local map in order to prompt reflection as a user physically wanders through memories, and Pull It Together, a material speculation on textile swatches whose wear and tear modulates to correspond to invisible sociocultural tensions. Finally, I shift my focus to explicitly, informatively trading inner processes in my design of Metaswap, an asynchronous, written activity in which strangers compare annotations about inner processes that arise as they tell personal stories about an uncertainty they are working to resolve in their lives.&#13;
&#13;
Making inner processes explicit and prompting revisitation of them offered both benefits and drawbacks for connection and reflection, but revealed important questions. A mixed-methods analysis across this work presents tensions in the human and machine instinct to make inferences and assumptions about others, and offers opportunities for interpersonally insightful, vulnerable, and trusting conversation when computer-mediated communication and sense-making systems produce deep content rather than deep interactions. Through this work, I hope to lay the foundation for future research on technology's role in supporting interpersonal understanding at a time when so many subjectivities collide and are summarized at the speed of data.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157719</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fluid-fluid displacement in porous-media microfluidics</title>
<link>https://hdl.handle.net/1721.1/157718</link>
<description>Fluid-fluid displacement in porous-media microfluidics
Qiu, Yu
Immiscible fluid-fluid displacement under geometric confinement is a key physical process in large-scale subsurface energy technologies such as geologic carbon sequestration and in small-scale microfluidic techniques. Research over the past few decades has provided improved understanding of the fluid-fluid displacement patterns on the macroscopic scale, which range from compact displacement to fractal pattern. Many questions remain, however, regarding how the macroscopic displacement patterns are controlled by the microscale interactions between the fluid interface and the solid surface in systems under geometric confinement like microfluidic devices and porous media. This fluid-solid interaction—exacerbated by the roughness inherent to all natural and engineered surfaces—introduces a large energy dissipation near the solid boundary that challenges our ability to interpret laboratory experiments and develop mathematical models. In Part I of this Thesis, we study the motion of a fluid-fluid interface at the scale of a single capillary through mathematical modeling and laboratory experiments. We first develop a phase-field model to simulate two-phase flow with moving contact lines in the partial wetting regime. We construct a self-consistent formulation of fluid-solid surface energy which allows prescribing arbitrary static contact angles. We then propose a formulation to account for nonequilibrium conditions near the contact line and demonstrate the ability of our model to simulate dynamic configurations, from spontaneous imbibition to wetting transition and interface pinch-off. We then experimentally study the shape of a moving interface in a capillary tube prewetted with the invading liquid. For viscously favorable displacements (when the invading fluid is more viscous than the defending fluid), we find a universal behavior of the dynamic contact angle—a macroscopic descriptor of interface shape—which increases monotonically with capillary number. 
In contrast, for viscously unfavorable displacements, we observe a sharp wetting transition where the dynamic contact angle shoots to 180° over a narrow range of flow rates. Above the transition, a trailing film of viscous defending fluid is left behind the displacement front and the invading fluid propagates along the tube center as a finger. We rationalize the emergence of this sharp, trailing-film type of wetting transition by means of a minimal-ingredients hydrodynamic theory that exhibits bifurcated solutions. In Part II of this Thesis, we investigate the role of surface roughness on two-phase displacements. We do so in a microfluidic device with a precisely controlled structured surface as an analogue for a rough fracture. In the drainage regime, we show that the roughness induces two types of liquid films entrained on the solid surfaces behind the displacement front: the classical Bretherton “thick film”, and a new type of “thin film” that is confined within the roughness. Each type of liquid film is characterized by distinct stability criteria and dewetting dynamics. In the imbibition regime, we show that surface roughness causes the wetting liquid to preferentially advance within the roughness layer. The formation of a leading film stabilizes the displacement front as the flow rate increases, which would otherwise—that is, in a smooth confinement—become fractal. In summary, our work sheds light on the microscale physics and macroscopic pattern formation in rough confinement that may control long-term mixing and reactivity in geological systems and lab-on-a-chip applications.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157718</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating Interventions in Fine-grained Contexts for Habit Formation</title>
<link>https://hdl.handle.net/1721.1/157717</link>
<description>Investigating Interventions in Fine-grained Contexts for Habit Formation
Khan, Mina
Behavior change is important, yet hard to sustain. Habits are automatic responses to specific contextual cues, and can help sustain behavior change. Fine-grained specific contexts are commonly used in habit formation, but interventions in automatically-detected fine-grained contexts have rarely been explored for habit formation. &#13;
&#13;
We investigate habit-formation using interventions in fine-grained mobile, physical-world and digital, computer-based contexts, making three key contributions for each: a survey to identify behavior change needs, a prototype system designed to deliver fine-grained context-specific interventions, and a study to investigate habit-formation using interventions in fine-grained contexts, compared to interventions in less fine-grained contexts. We use the Self-report Habit Index (SRHI) and Self-Report Behavioral Automaticity Index (SRBAI) to measure habit formation and habit automaticity, respectively.&#13;
&#13;
For mobile, physical-world behavior change, the survey of needs (N=53 participants) indicated that participants want diverse and personalized behavior change support in diverse and specific contexts. We created a wearable device with on-device deep learning for interventions in personalized and privacy-preserving egocentric visual contexts. In a 4-week pilot study (N=10), interventions in egocentric visual contexts led to a greater percentage increase in average habit formation (SRHI) and automaticity (SRBAI) than interventions in coarse-grained contexts based on time, geolocation, and physical activity. The percentage increase in median habit formation was also greater for the fine-grained egocentric context group, whereas the percentage increase in median habit automaticity was similar between the two groups. For both groups, the habits persisted in the post-study evaluations 1 and 10 weeks later, without interventions.&#13;
&#13;
For computer-usage behavior change, the survey of needs (N=68) indicated that participants want to reduce excessive/unnecessary use, e.g., of social media, and found off-the-screen breaks helpful. We created a Chrome extension to deliver interventions based on specific web activities, and conducted a 6+2-week study (N=31; 6 weeks of interventions and 2 weeks of post-study without interventions). After 6 weeks, interventions in fine-grained website-entry-based contexts led to a greater percentage increase in mean and median habit formation and automaticity than interventions in coarse-grained interval-based or random contexts. After the additional two-week post-study, without interventions, the website-entry group had the largest percentage increase in mean SRHI/SRBAI, whereas the interval-based group had the largest percentage increase in median SRHI/SRBAI. &#13;
&#13;
Qualitative results from both studies indicated that interventions in fine-grained contexts were delivered at more opportune moments and were less disruptive. We discuss the limitations of our research and present a first step towards investigating interventions in fine-grained contexts for habit formation, potentially for sustainable behavior change, without long-term dependence on technology.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157717</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Vegetation Morphology on Turbulence and Bedload Transport</title>
<link>https://hdl.handle.net/1721.1/157716</link>
<description>The Impact of Vegetation Morphology on Turbulence and Bedload Transport
Zhao, Tian
By promoting sediment deposition and retention, aquatic vegetation can contribute to riverbank stabilization, biodiversity, and carbon sequestration. The morphology and distribution of aquatic plants influence the velocity field, turbulence intensity, and sediment transport in wetlands, which in turn affects erosion and deposition processes. By combining physical and numerical experiments, this thesis quantified how vegetation geometry impacts turbulence and sediment transport near the bed.&#13;
&#13;
In aquatic canopies, turbulence generated at the stem scale, and for submerged canopies, also in the canopy shear layer, could contribute to the near-bed turbulence. Results of flume experiments using a constant channel average velocity revealed that bedload transport was predominantly correlated with near-bed turbulence, but was also weakly correlated with near-bed velocity. First, in emergent canopies, if vegetation was not clustered, turbulent kinetic energy (TKE) and bedload transport did not depend on the arrangement and stem diameter(s) and could be predicted from plant biomass and velocity. If vegetation was clustered in patches, TKE and bedload transport decreased with increased clustering and could be predicted from plant biomass, patch geometry, and velocity. Second, for constant channel velocity, submerged canopies could enhance or reduce bedload transport, depending on their degree of submergence. With increasing submergence H/h (defined as the ratio of flow depth H to canopy height h), the near-bed velocity and TKE decreased, and the source of near-bed turbulence shifted from stem wake to the shear layer at the canopy top. A model to predict near-bed TKE in submerged canopies was developed and used to explore bedload transport under more realistic conditions with constant energy slope and flexible vegetation. For a constant energy slope, the denser the canopy, and/or the larger the fraction of flow depth occupied by the canopy (decreasing H/h), the more sediment transport was reduced relative to unvegetated beds. This thesis provides essential parameterizations of vegetation for hydrodynamic and morphodynamic models, which can be used to predict the vegetation conditions that promote or diminish erosion, offering a useful guide for river and coastal restoration.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157716</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the non-microbial sources and sinks of dissolved metabolites in seawater</title>
<link>https://hdl.handle.net/1721.1/157715</link>
<description>On the non-microbial sources and sinks of dissolved metabolites in seawater
Germolus, Noah Paul
Dissolved marine metabolites are small (&lt;1000 Da) organic chemicals that remain in seawater when passed through a filter (typically &lt;0.2 µm pore size). Their name implies their biological function: to be produced and consumed by cellular metabolism. These chemicals are the flows of the “microbial loop”—the principle that most of the photosynthesized matter in the ocean is exchanged, respired, and restructured by single-celled organisms. Metabolites have critical biological utility, so they are considered extremely labile; estimates of the time each spends outside cells range from hours to days. Their concentrations are drawn down by their consumers to nanomolar and picomolar levels, making measurement difficult. However, improved techniques to measure metabolites simultaneously and at extremely low concentrations open the question of what happens to metabolites outside the cell membrane. Conventionally, representations of labile DOM exchange networks avoid that question—metabolites’ short lifetimes imply their flows lead from one organism to the next. This thesis begins to interrogate that assumption, asking whether there are other processes that could change the seawater exometabolome on time scales that are relevant to microbial life. In Chapter 1 I discuss the ways ambient metabolite pools could be affected by animals, chemistry, and physics. In Chapter 2 I investigate the photolysis of metabolites and examine metabolomic techniques’ suitability for such experiments. In simulated sunlight, 11 of 57 metabolites decayed to some extent in artificial or natural seawater, and tryptophan and kynurenine may decay rapidly in the mixed layer of an oligotrophic ocean. For Chapter 3, I captured five species of migratory zooplankton and measured metabolites in their dissolved excreta. Four species survived the experiment and produced 43 metabolites, many at a rate that should be measurable in field samples. 
Chapter 4 harnesses the previous two chapters, plus a model for physical mixing, to probe a field dataset comprising 60 metabolites from Hydrostation S (south of Bermuda). Based on eight profiles over the course of two days, I posit: (1) copepods alone can supply the entire demand of &gt;20 compounds to the mixed layer; (2) mixing is rapid enough to erase input signatures in the mixed layer; and (3) photochemistry is a slow leak of metabolites to forms whose lability is yet unknown. Chapter 5 reflects on how metabolites break the microbial loop—and suture it together with more ecological richness than with elemental fluxes alone.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157715</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creativity and Justice: Leveraging Creative Learning Principles to Co-Design Just Futures With and For Young People</title>
<link>https://hdl.handle.net/1721.1/157714</link>
<description>Creativity and Justice: Leveraging Creative Learning Principles to Co-Design Just Futures With and For Young People
Trapp, Jaleesa
Young people who live in underserved and under-resourced communities and have access to a creative learning environment are poised to create positive change within their communities. Their lived experiences make them experts on the issues their communities are currently facing, and the creative learning environment lends itself as a space where young people can prototype, improve, and implement solutions. Young people can use their imagination and creativity to seek justice and re-imagine their communities.&#13;
&#13;
This dissertation examines the Youth Activism and Advocacy program, which I designed using a transformative justice framework, in collaboration with the Clubhouse Network, a global network of after-school centers in historically under-resourced communities. Young people in ten communities around the world used their creativity, lived experiences, and civic imagination to develop and sustain social justice campaigns in their communities.  This dissertation addresses the following research questions: (1) How might we cultivate and support constructionist learning environments that serve young people from communities that have been marginalized? (2) How might we use computational tools to support creative learning while developing and amplifying social justice campaigns? (3) How might we use Human Centered Design methods to allow for meaningful participation and engagement from youth who have been marginalized?&#13;
&#13;
While there were multiple pathways into and motivations for engaging in community action projects, all of the young people gained technical, organizational, and leadership skills that can be applied in future education and career pursuits. The outcomes of the Youth Activism and Advocacy program are complex and intertwined, prompting a call to action to further examine how civic engagement and creative learning can broaden participation in STEM and computing fields—and support youth in making a positive impact in their communities, moving them towards greater justice.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157714</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Matters of Illuminance - Transforming Light into Material Artifacts</title>
<link>https://hdl.handle.net/1721.1/157713</link>
<description>Matters of Illuminance - Transforming Light into Material Artifacts
Callender III, Dexter
This research explores a process to transform light into physical artifacts. It develops a series of custom software systems to capture images of sunlight moving through a building and transform them into three-dimensional forms. It uses digital manufacturing methods to create the three-dimensional forms out of glass. The aim of this work is 1) to construct a methodology for recording light’s interaction with architecture as three-dimensional forms, and 2) to produce glass sculptures that exist in a fine art setting and contribute to the lineage of 21st-century light artists. The academic contribution of this research builds upon the autographic design framework defined by Dietmar Offenhuber. Offenhuber describes the autographic design process as “the practice of shaping the conditions that allow traces to emerge and guiding their interpretation to demonstrate causality and evidence”.1 The technique I use to transform light into three-dimensional forms follows the four steps of the autographic design process. The goal of this technique is to provide a repeatable process and data format that captures information about light’s interaction with architecture at specific locations. The process produces three-dimensional forms, physical glass sculptures, and media that guide their interpretation, providing insight into the design and history of the building. The artistic contribution of this research produces glass sculptures that physicalize the shapes of light I observed and recorded at the location. The goal of these sculptures is to create meaningful physical artworks that reflect the nuanced shapes and subtle aesthetic qualities of natural light. Exhibiting the sculptures in spaces that are abundant with natural light creates new interactions between the glass and the light, offering unique visual experiences that change over time. I bolster these artworks with experiential accounts of my time spent in the building. 
The artwork I produced as part of this research was exhibited at the Wiesner Gallery at MIT and aims to exist in a fine arts setting, contributing to the lineage of Light &amp; Space artists such as Larry Bell and Robert Irwin.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157713</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond-the-Ice: Designing Games for Facilitating Deeper Conversations</title>
<link>https://hdl.handle.net/1721.1/157712</link>
<description>Beyond-the-Ice: Designing Games for Facilitating Deeper Conversations
Lee, Cassandra
In this age of constant communication, we’ve never been more connected, yet all of our numerous, fast, and convenient connections lack the depth and intimacy we truly crave. The desire for more authentic social experiences necessitates vulnerability, honesty, and risk, but introducing such dynamics presents a great challenge in the context of the wider landscape of public discourse. Designers across disciplines have suggested using games to facilitate stronger social connection, since the structures within games can expose players to alternate social norms and encourage risk-taking. However, few have designed games that specifically foster more intimate forms of dialogue or offer scaffolding for players to see the act of sharing authentically and listening deeply as ways to play. In this thesis, I explore the novel intersection between play, intimate conversation, and technology by presenting a variety of prototypes and fully developed games that employ innovative mechanics designed to facilitate authenticity, vulnerability, complexity, and subjectivity. This work builds on formal knowledge from the social sciences, HCI, and game design, as well as informal knowledge from facilitation, gathering practices, party games, and Tarot, by presenting five distinct design principles aligned with theories grounded in past work: 1) Make emotional disclosure special; 2) Scaffold responsiveness; 3) Approach depth through fun; 4) Empower “the work” through constraints and permissions; 5) Center objects to feel with. Following a thorough Research through Design (RtD) method, I designed 15 unique prototypes and proofs of concept that explore various aspects of the five principles. 
Two of the games were designed, developed, playtested, and evaluated: Analogia, a card game that uses generative images to inspire emotion-rich conversations, and Crossroads, a digital game where players are guided to unlock a secret insight by co-creating generative images inspired by one another’s real experiences. This work contributes two well-tested games that evoke five compelling principles; a series of mechanics for stimulating dialogue (dual-stimulus, bridge-and-tunnel, image scrying, listener roles); and pilot data from playtests that demonstrate both the promise and the challenges of these mechanics in creating conversational outcomes. Additionally, both spotlighted games creatively employ generative artificial intelligence (AI) to help mediate player interactions through image interpretation and co-creation. Although this is a thesis about conversation games, it critically engages with the current social zeitgeist, provides widely applicable insights, and presents nuanced ways to think about the future of socio-technical systems that seek to encourage deeper, more authentic ways of connecting.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157712</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Textile Macroelectronics: Architecting Sensate and Computational Fabrics across Scales</title>
<link>https://hdl.handle.net/1721.1/157711</link>
<description>Textile Macroelectronics: Architecting Sensate and Computational Fabrics across Scales
Wicaksono, Irmandy
Textiles are omnipresent and among the oldest forms of art and culture in human civilization. They serve as our protective skin, the interface between our bodies and the environment, and a medium for self-expression and collective experience. As electronics become more compliant, miniaturized, and low-cost, textiles provide an ideal substrate for technology integration, further driving the era of ubiquitous computing. My research fuses recent advances in functional materials, digital fabrication, hardware systems, and immersive technologies to demonstrate Textile Macroelectronics architecture and develop sensate and computational fabrics across scales.&#13;
&#13;
In this dissertation, I propose a ubiquitous computational textile framework—a synergy between functional device selection, textile structures, fabrication tools, and system architecture—that integrates a distributed network of sensing and computational elements as primitives or raw materials in the manufacturing process of electronic textile products. In the first part of the dissertation, I present several methods, artifacts, and implementations of sensate textiles using functional fibers and digital machine knitting. I argue that to promote the disruption and adoption of sensate textiles and achieve seamless integration, we require a better hierarchical understanding of textile construction and fiber-fabric properties, as well as ways to integrate electronics and functionalities with industrial textile fabrication processes. By controlling functional and common yarn inputs, along with knitting structures and patterns, I can architect fabric forms and aesthetics while tuning their electrical and mechanical properties. With this approach, I have developed a set of custom proxemic and tactile textile interfaces based on capacitive and piezoresistive sensing for musical expression, human-computer interaction, activity recognition, and multi-sensory experiences in various forms such as cloth, footwear, mats, carpets, and large-scale architectural facades.&#13;
&#13;
In the second part of the dissertation, I will discuss my work in exploring flexible, stretchable, and soft printed circuit technologies, incorporating multi-modal sensing with distributed computation to address scalability issues inherent in large and dense sensate textiles. These efforts have led to unique power, interconnection, and networking paradigms that allow us to transition from application-specific sensate textiles to generic computational fabrics that can be tailored and programmed for various applications. Finally, through these collective and complementary efforts, I aim to demonstrate an ecosystem of fabric artifacts that will lead us toward an Electronic Textile Gaia—a vision where sensing and intelligence are seamlessly interconnected and integrated into the fabric of everyday life, from in-body, on-body, room-scale, to architectural textiles, for applications ranging from physiological and physical activity monitoring to interactive media and built environments.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157711</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Perceptual Augmentation</title>
<link>https://hdl.handle.net/1721.1/157710</link>
<description>Towards Perceptual Augmentation
Chin, Sam
This thesis explores the concept of perceptual augmentation, focusing on expanding human sensory capabilities beyond their biological limitations. It challenges traditional approaches to sensory enhancement by emphasizing the importance of perception over mere sensory input. Drawing inspiration from the diverse sensory abilities found in nature, the research aims to develop methods for meaningful augmentation of human perception that can impact daily life. The study adopts an ecological approach to perceptual augmentation, grounded in Gibsonian ecological psychology. Key principles include providing correct mental models of augmentation devices, leveraging environmental training and natural tasks, emphasizing multisensory interfaces with sensorimotor feedback, and creating affordances that mimic the natural world. This approach seeks to facilitate perceptual learning through natural interaction with the environment, rather than relying on extensive explicit training.&#13;
The thesis presents early work in exploring and evaluating individual principles of this ecological framework for perceptual augmentation. While a gap remains between the proposed theoretical approach and current research outcomes, the studies conducted focus on augmenting perception for specific tasks such as pitch interval perception, pilot situation awareness, and sleep staging. The research does not yet demonstrate a generalized, "all-purpose" augmented sense, but lays groundwork for future investigations, including a proposed experiment to mitigate age-related hearing loss using the developed principles.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157710</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Temporal Telepresence: Immersive Interfaces for TeleAbsence</title>
<link>https://hdl.handle.net/1721.1/157709</link>
<description>Temporal Telepresence: Immersive Interfaces for TeleAbsence
Pillis, D.
Storing the past in a simulation may enable greater understanding of ourselves, our stories, and our histories. The urge to capture our past into networks of photographic, written, filmed, and object-based narratives has long been a means for individuals to identify change and growth, and to gain perspective on themselves. Using a dataset of human narratives derived from records and ephemera, this thesis explores a novel approach to preserving and interacting with memories. We present an interactive system of objects and applications that supports intergenerational memory preservation by enabling individuals to actively explore the relationship between personal artifacts, photographs, the spaces of their past, and their memories. This system integrates personal digital twins, photogrammetry, Gaussian splatting, and tangible interfaces to create a new way of experiencing the past, based on interactivity with architectural artifacts and simulations from an individual’s life. Using an iterative participatory design process, we developed a set of multisensory interaction experiences that allow individuals to explore their relationship to autobiographical memory. The system dynamically links autobiographical memories with the environments where they took place, responding to text, photo, and object-based interactions. This experience invites individuals to modify their recollections by exploring how photo, video, and 3D space relate to the experience of revisiting narratives from the past. Applications of this system include assisting individuals with dementia, aging, memory loss, and Alzheimer’s disease. Our initial studies were promising. When using the simulation system, individuals spent more time reminiscing, discussed more memories, and experienced greater presence in their recollections than without the interactive paradigm. The system also encouraged family members to reinforce their memories by actively re-encoding them through the simulation interfaces. 
Results demonstrated that presence in memories seemed more vivid, detailed, and spatially accurate than before the intervention. The result is a new memory-sharing experience that benefits individuals and families by allowing them to understand how their interactions with the past can be enriched through the integration of artifacts and simulations that impact the development of autobiographical memory.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157709</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk-Benefit Assessment of Pandemic Virus Identification</title>
<link>https://hdl.handle.net/1721.1/157708</link>
<description>Risk-Benefit Assessment of Pandemic Virus Identification
Jeyapragasan, Geetha
Pandemic Virus Identification (PVI) aims to assess unknown viruses for their pandemic potential in immunologically naive human populations. While proponents argue that PVI could facilitate targeted spillover prevention and accelerate medical countermeasure development, critics raise concerns about biosafety and biosecurity risks. This thesis presents a comprehensive mathematical framework to evaluate the benefits, biosafety risks, and biosecurity risks associated with PVI research.&#13;
&#13;
Using a combination of mathematical modeling and expert elicitation, we developed a structured approach to estimate the potential impacts of PVI. Our framework suggests that identifying a single pandemic-capable virus through PVI could potentially save lives by reducing natural pandemic risks. However, this benefit is substantially outweighed by the estimated anthropogenic risks from potential accidental pandemic events and deliberate misuse scenarios. The overall expected value of identifying a single pandemic-capable pathogen was estimated to be strongly negative. &#13;
&#13;
Significant uncertainty exists in many key parameters estimated through surveys, with wide confidence intervals reflecting the lack of consensus among experts. Expert opinions varied considerably on topics such as the likelihood of funding for medical countermeasures and the potential for deliberate misuse of pandemic agents. This modeling work primarily aims to provide exploratory estimates to guide future work. &#13;
&#13;
Our findings underscore the urgent need for improved governance of research involving potential pandemic pathogens. This study provides a quantitative basis for ongoing discussions about the balance between scientific advancement and public safety in high-risk areas of life sciences research.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157708</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-bounce Returns for Specular Surface Mapping from Consumer-grade Flash LiDAR</title>
<link>https://hdl.handle.net/1721.1/157707</link>
<description>Multi-bounce Returns for Specular Surface Mapping from Consumer-grade Flash LiDAR
Lin, Tsung-Han
This thesis proposes an approach that leverages multi-bounce returns from a flash LiDAR on portable smartphones for 3D specular surface reconstruction. This is an important research problem, as most traditional LiDAR systems fail to detect specular surfaces. Because mirrors and glass are everywhere, vision systems that fail to detect specular surfaces can be detrimental. Applications like mapping may become inaccurate, and more critically, robots could crash into undetected windows during navigation, leading to potentially fatal outcomes. We believe this work can meaningfully enhance the robustness of specular surface detection, with LiDAR complementing any kind of vision system, particularly image-based ones.&#13;
&#13;
Traditional LiDAR systems typically assume that all returns are single-bounce, which can lead to inaccurate representations of specular surfaces like mirrors or glass, often causing them to appear as though there is a hole. In contrast, this approach models the multi-bounce paths, providing a more accurate reconstruction of these specular surfaces.&#13;
&#13;
We operate with a consumer-grade LiDAR that does not require manual calibration and can be operated in real time on an affordable, portable smartphone. Consumer-grade multi-beam flash LiDAR is challenging to work with, given its coarse resolution, co-located sensors, and multiplexing setup. In the face of these challenges, we propose to solve the association problem with the “reciprocal pair” algorithm, which can discern different types of bounces from the multi-bounce returns.&#13;
&#13;
The algorithm is shown to detect specular surfaces over multiple consecutive frames, enabling dense mirror mapping. In addition to 3D reconstruction, we show that multi-bounce returns help enhance performance in applications such as segmentation and novel view synthesis. Our method can be combined with these state-of-the-art learning-based models, enhancing their robustness by discerning ambiguous scenarios. In general, this approach can map various specular surfaces such as mirrors and glass, without making assumptions about particular specular surface shapes, and can operate on non-perpendicular specular-diffuse surface pairs.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157707</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Timbral Transformations</title>
<link>https://hdl.handle.net/1721.1/157706</link>
<description>Timbral Transformations
Shand, Jessica
From folk songs to festivals, cafes to concert halls, and religious rituals to recording studios, the flute has long had a shapeshifting, cross-cultural presence. This thesis leverages 21st-century technologies not only to explore and extend the timbral versatility of flutes, but also to underscore the performative, fluid, and ever-evolving nature of timbre more generally. At the core of the project is the creation of sequences of discrete sounds that interpolate between semantic categories and a collection of fixed media compositions based on those sequences, both of which consist entirely of flute sounds that have undergone varying degrees of electronic manipulation. By means of digital signal processing techniques, the flute wavers in and out of a multitude of sonic identities. Sometimes, it masquerades as another familiar object or interface (e.g., a ticking clock) or abstractly evokes a concept or phenomenon (e.g., a storm); at other times, it beckons toward the ethereal or ineffable, resisting indexical identification altogether. With source materials warped, layered, and splayed across the frequency spectrum, such concerns as “the real” and “the true” begin to move out of focus, making way for attention to embodied phenomenological experiences of sound. As this thesis positions compositional practice as a form of research, its outputs range from the conceptual to the creative and the computational. In addition to the music at its core, the project interfaces with gender studies in its original exposition on timbre and timbral identity, includes a rigorous set of experiments with human and machine listeners, and applies multimodal language models in ways not previously seen in musicology or music theory. A live performance incorporating each of these project vectors and an audience discussion following the event offer further opportunities for reflection and critique.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157706</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Steam boilers</title>
<link>https://hdl.handle.net/1721.1/157657</link>
<description>Steam boilers
Dennett, C. L.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157657</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wave propagation sensors for structural control</title>
<link>https://hdl.handle.net/1721.1/157654</link>
<description>Wave propagation sensors for structural control
Pines, Darryll J.
            (Darryll John)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1992; Includes bibliographical references (leaves 166-172).
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157654</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practice of consulting firms in corporate strategic planning.</title>
<link>https://hdl.handle.net/1721.1/157653</link>
<description>Practice of consulting firms in corporate strategic planning.
Chapman, Beverly Jean.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1978; Bibliography: leaves 82-84.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157653</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Derived distribution of water volume above a given threshold discharge.</title>
<link>https://hdl.handle.net/1721.1/157652</link>
<description>Derived distribution of water volume above a given threshold discharge.
Chan, Siu-On.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography : leaves 138-139.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157652</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Global solvability of invariant differential operators.</title>
<link>https://hdl.handle.net/1721.1/157651</link>
<description>Global solvability of invariant differential operators.
Zhang, Weida.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1978; Vita.; Bibliography: leaves 96-97.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157651</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Locomotive engineering</title>
<link>https://hdl.handle.net/1721.1/157650</link>
<description>Locomotive engineering
Galloupe, Francis E.
            (Francis Ellis)
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157650</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The efficiency of marine engines</title>
<link>https://hdl.handle.net/1721.1/157649</link>
<description>The efficiency of marine engines
Main, Charles T.
            (Charles Thomas),
            1856-1943.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157649</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Treatment of Vershire copper ore</title>
<link>https://hdl.handle.net/1721.1/157648</link>
<description>Treatment of Vershire copper ore
Adams, W. W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1878
</description>
<pubDate>Tue, 01 Jan 1878 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157648</guid>
<dc:date>1878-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explanations and calculations accompanying a thesis design for a town hall</title>
<link>https://hdl.handle.net/1721.1/157647</link>
<description>Explanations and calculations accompanying a thesis design for a town hall
Capen, G. W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Architecture, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157647</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report on the working for silver and gold of a middle grade product from ore of the Merrimac Mine, Newburyport</title>
<link>https://hdl.handle.net/1721.1/157646</link>
<description>Report on the working for silver and gold of a middle grade product from ore of the Merrimac Mine, Newburyport
Jenney, Walter.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157646</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Federal Urban Renewal Program: a financial and economic analysis</title>
<link>https://hdl.handle.net/1721.1/157645</link>
<description>The Federal Urban Renewal Program: a financial and economic analysis
Anderson, Martin Carl.
Thesis: Ph. D., Massachusetts Institute of Technology, School of Industrial Management, 1962; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157645</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The colors of our common lights</title>
<link>https://hdl.handle.net/1721.1/157644</link>
<description>The colors of our common lights
Pickering, Wm. H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1879
</description>
<pubDate>Wed, 01 Jan 1879 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157644</guid>
<dc:date>1879-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New experiments in sound</title>
<link>https://hdl.handle.net/1721.1/157643</link>
<description>New experiments in sound
Jacques, William W.,
            1855-1932.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157643</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The electrodeposition of iron from perchlorate solutions</title>
<link>https://hdl.handle.net/1721.1/157642</link>
<description>The electrodeposition of iron from perchlorate solutions
Johnson, Algot J.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrochemical Engineering, 1921; Includes bibliographical references (leaves 40-41).
</description>
<pubDate>Sat, 01 Jan 1921 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157642</guid>
<dc:date>1921-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A crosscorrelation method for measuring the impulse response of reactor systems</title>
<link>https://hdl.handle.net/1721.1/157641</link>
<description>A crosscorrelation method for measuring the impulse response of reactor systems
Balcomb, J. Douglas.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1961; Includes bibliographical references (leaves 136-137).
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157641</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wear studies of abrasive particles</title>
<link>https://hdl.handle.net/1721.1/157640</link>
<description>Wear studies of abrasive particles
Distel, Joseph William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1956; Bibliography: leaf 50.
</description>
<pubDate>Sun, 01 Jan 1956 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157640</guid>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wear studies of single aluminum oxide grains during grinding</title>
<link>https://hdl.handle.net/1721.1/157639</link>
<description>Wear studies of single aluminum oxide grains during grinding
Cole, John M.
            (John Martin)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1955; Includes bibliographical references (leaf 48).
</description>
<pubDate>Sat, 01 Jan 1955 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157639</guid>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forces in internal grinding</title>
<link>https://hdl.handle.net/1721.1/157638</link>
<description>Forces in internal grinding
Reichenbach, George S.
            (George Sheridan)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1952; Includes bibliographical references (leaves 28-29).
</description>
<pubDate>Tue, 01 Jan 1952 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157638</guid>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The mechanics of dry surface grinding</title>
<link>https://hdl.handle.net/1721.1/157637</link>
<description>The mechanics of dry surface grinding
Marshall, Earle Robert,
            1919-
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1949
</description>
<pubDate>Sat, 01 Jan 1949 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157637</guid>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compilation of general descriptions and data of pumps manufactured in the United States</title>
<link>https://hdl.handle.net/1721.1/157636</link>
<description>Compilation of general descriptions and data of pumps manufactured in the United States
Zaworski, Robert Joseph.; Anderson, Donald E.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1947; Bibliography: leaves [1-15].
</description>
<pubDate>Wed, 01 Jan 1947 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157636</guid>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An application of Bernoulli polynomials to the theory of cyclotomic fields.</title>
<link>https://hdl.handle.net/1721.1/157635</link>
<description>An application of Bernoulli polynomials to the theory of cyclotomic fields.
Segal, Robert.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1965
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157635</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interaction of aflatoxin B1 with DNA in vivo in the rat and mouse</title>
<link>https://hdl.handle.net/1721.1/157634</link>
<description>Interaction of aflatoxin B1 with DNA in vivo in the rat and mouse
Croy, Robert George.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1979; Vita.; Bibliography: leaves 153-165.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157634</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Loss of dignity : social dangers of a computerized society</title>
<link>https://hdl.handle.net/1721.1/157633</link>
<description>Loss of dignity : social dangers of a computerized society
Yablon, Jay Russell.
Thesis: B.S., Massachusetts Institute of Technology, Department of Political Science, 1976; Includes bibliographical references (leaves 201-206).
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157633</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Short term distortion in dynamic noise-filters.</title>
<link>https://hdl.handle.net/1721.1/157632</link>
<description>Short term distortion in dynamic noise-filters.
Wright, John Nelson.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1976; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157632</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermodynamic and heat transfer evaluation of a thermic solar panel</title>
<link>https://hdl.handle.net/1721.1/157631</link>
<description>Thermodynamic and heat transfer evaluation of a thermic solar panel
Yasuda, Arthur Kenichi.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1976; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157631</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of adhesion mechanisms</title>
<link>https://hdl.handle.net/1721.1/157630</link>
<description>An investigation of adhesion mechanisms
Yee, Geary Yee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1976; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157630</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specificity and structural characterization of the PDZ domain from DegS, an extracytoplasmic E. coli protease</title>
<link>https://hdl.handle.net/1721.1/157629</link>
<description>Specificity and structural characterization of the PDZ domain from DegS, an extracytoplasmic E. coli protease
Walsh, Nathan P.
            (Nathan Peter),
            1973-
DegS is a membrane-bound bacterial protease that is involved in the extracytoplasmic-stress response. The C-terminal domain has limited homology to PDZ domains and was thought to be involved in regulation or substrate recognition. A model of this PDZ domain was generated from NMR solution studies and homology modeling. Peptide selection studies identified the sequence Tyr-Tyr-Phe (YYF) as a C-terminal motif that binds to the PDZ domain. Possible targets were identified including many of the outer-membrane proteins (OMPs), which contain both a conserved terminal YxF and internal YYF sequences. The binding of the DegS PDZ domain to a YYF peptide and OMP derivatives was confirmed using microcalorimetry. Because stress signaling can be triggered by over-expression of some of the outer-membrane proteins, I propose that DegS may receive a signal from unassembled OMPs and transmit it to the σE transcription factor by increasing proteolysis of RseA.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, February 2002; Includes bibliographical references (p. 87-95).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157629</guid>
</item>
<item>
<title>3-D Topology Optimization of Spatially Averaged Surface-Enhanced Raman Devices</title>
<link>https://hdl.handle.net/1721.1/157601</link>
<description>3-D Topology Optimization of Spatially Averaged Surface-Enhanced Raman Devices
Hammond, Ian M.
Numerous nanophotonics applications necessitate designs that enhance distributed incoherent emission. Representative applications include light-emitting diodes, thermal emitters, and Raman sensing. Previous efforts in full-scale topology optimization for Surface Enhanced Raman Sensing (SERS) have predominantly focused on single particle emissions or two-dimensional systems, which are impractical for actual fabrication. An objective function represented by ∫|E|⁴dV effectively approximates Raman enhancement. This function tends to diverge near sharp tips and other singular geometries in three-dimensional spaces for relevant materials. This thesis delves into methodologies for regularizing the optimization process to preclude the formation of such problematic geometries. Additionally, it integrates lithography constraints to ensure that the optimized SERS substrates are viable for fabrication. To align with computational limits, various strategies are employed to make the system manageable. The techniques developed in this study facilitate the practical design of 3-D systems that enhance incoherent emission through topology optimization.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157601</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photocatalysis in a New Light: A Biohybrid Approach for Improved Reactivity with Tunable, Low-Energy Light Excitation</title>
<link>https://hdl.handle.net/1721.1/157600</link>
<description>Photocatalysis in a New Light: A Biohybrid Approach for Improved Reactivity with Tunable, Low-Energy Light Excitation
Cesana, Paul T.
Since the advent of photoredox catalysis, much thought has been devoted to the development of exciting reaction modalities and the catalysts which perform these reactions. Less thought has been given to the specific aspects of light absorption as the key step in photocatalytic mechanisms. Natural photosynthetic systems drive the high-energy reactions of photosynthesis with efficient and broadband energy capture. They provide a blueprint toward optimizing these processes in synthetic systems. In photosynthesis, both light capture and reactivity have been optimized by separation into distinct sites. The dominant process by which absorbed sunlight is transferred between these sites is resonance energy transfer, which is highly efficient over long distances. This work highlights that light capture and energy transfer are crucial steps for the design of highly efficient photocatalysts in the future.&#13;
Chapter 1 describes the relevant structures in natural photosynthesis as inspiration for synthetic approaches, the different mechanisms of energy transfer, and examples of photocatalytic systems that harness such excitation transfer processes to improve performance. Chapter 2 reports the synthesis of a biohybrid photocatalyst inspired by the modular architecture of photosynthetic apparatus which conjugated a photosynthetic light harvesting protein to a transition metal photocatalyst. Spectroscopic investigation found that absorbed photoenergy was efficiently funneled from the light harvester to the photocatalyst. The utility of the biohybrid photocatalyst was demonstrated via an increase in yields for two test reactions, including enabled reactivity at red wavelengths where the photocatalyst alone does not absorb. Chapter 3 establishes the power of incorporating nature’s design into non-natural photoenzymatic catalysis, generalizing the approach to other systems and methodologies. Photoenzymes require high-intensity light to function because of the poor absorption properties of their photoactive intermediate. A conjugate composed of a covalently linked photoenzyme and light antennae separates light capture from catalysis. Spectroscopic characterization of the conjugate showed the presence of efficient energy transfer from the light-harvesting components to the photoenzyme. In the presence of energy transfer, a maximum ~4-fold increase in product yields was observed as well as enabled reactivity. Chapter 4 highlights spectroscopic exploration into emerging molecular catalyst species. Finally, Chapter 5 provides an outlook to the future possibilities of the topics presented herein.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157600</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies on the synthesis of bisindole Aspidosperma alkaloids</title>
<link>https://hdl.handle.net/1721.1/157599</link>
<description>Studies on the synthesis of bisindole Aspidosperma alkaloids
Pinto, Taylor
I. Introduction and Background on Aspidosperma Alkaloids&#13;
A brief overview of monoterpene indole Aspidosperma alkaloids is discussed. The biosynthesis of the characteristic pentacyclic core from tryptamine and secologanin is summarized. Some representative examples of total syntheses of Aspidosperma alkaloids are discussed.  Synthetic strategies for the synthesis of bisindole members of the family are also examined.&#13;
&#13;
II. Total Synthesis of (–)-Voacinol, (–)-Voacandimine C, and related congener, (−)-methylenebisdeoxoapodine&#13;
We describe the first total synthesis of complex aspidosperma alkaloids (–)-voacinol and (–)-voacandimine C via a late-stage C7-methylenation strategy inspired by a biogenetic hypothesis. We envisioned rapid access to these natural alkaloids from a common, symmetrical precursor assembled by methylenation of a D-ring-oxidized variant of the structurally related natural product (–)-deoxoapodine. Chemoselective N9-oxidation of a pentacyclic deoxoapodine precursor enabled the synthesis of the corresponding hexacyclic C8-aminonitrile. Stereocontrolled methylenation of a C8-enamine derivative of deoxoapodine, accessed by ionization of the C8-aminonitrile, afforded a symmetrical dodecacyclic bisaminonitrile as a versatile precursor to these bisindole alkaloids. Final-stage, biosynthesis-inspired, controlled reductive opening of the oxolane substructures of this dodecacyclic intermediate provided a unified approach to (–)-voacinol and (–)-voacandimine C, while direct reduction of the same intermediate afforded the structurally related (–)-methylenebisdeoxoapodine.&#13;
&#13;
III. Progress Toward the Total Synthesis of Voacandimine A&#13;
We describe our work toward the total synthesis of bisindole Aspidosperma alkaloid, voacandimine A. Key features of the synthetic progress include two routes for monomer synthesis, two methods for complex fragment assembly to form the bisindole structure, and strategies to address the stereochemistry of the ring fusion.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157599</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symmetry and its Signatures in Quantum Many-Body Dynamics</title>
<link>https://hdl.handle.net/1721.1/157598</link>
<description>Symmetry and its Signatures in Quantum Many-Body Dynamics
Ogunnaike, Olumakinde
Symmetry has long been a defining feature in our understanding of statistical or many-body systems. By making appeals to universal properties associated with global symmetries and topology, one may describe universal properties of “typical” states and dynamics in equilibrium, even when keeping track of the precise dynamics of a particular many-body system is impossible. This challenge of tracking allowable states and dynamical transitions is only exacerbated for non-equilibrium systems, where one cannot rely on the same notions of typicality. Further, when driven out of equilibrium by external interactions, quantum orders constructed from highly sensitive correlations between states are liable to vanish. Despite these conceptual and practical difficulties, the rise of quantum technologies and accompanying theoretical developments has motivated a surge of interest in dynamical quantum phenomena. The recent developments in the field of quantum many-body dynamics provide satisfactory accounts of many interesting phenomena, including failures of the Eigenstate Thermalization Hypothesis, various dynamical and mixed-state phases of matter, and measurement-induced dynamics and phase transitions. Many of these results are explained for specific systems or within different conceptual frameworks; however, these results rarely generalize. In this thesis, I attempt to unify many aspects of quantum many-body dynamics under the same conceptual framework through an investigation of the universal signatures of symmetry in quantum dynamical systems. This is accomplished via a mapping between the averaged dynamics and the low-energy spectrum of an effective Hamiltonian in a “doubled Hilbert space,” comprised of two copies of the original space.
This provides a general and versatile framework to qualitatively understand both familiar and novel universal properties of dynamical phenomena like charge diffusion, sub(super)-diffusion of multipole moments in systems with short- and long-range interactions, charge and multipole phase transitions, and even measurement-induced phase transitions. By expanding into a doubled Hilbert space, one may capture the subtleties of non-equilibrium physics, and particularly dynamical phases, within the framework of equilibrium physics and phases. In this work, we examine how to understand various symmetry-constrained dynamical phases and phase transitions through a dual description of symmetry-constrained equilibrium phases and symmetry-breaking transitions in an enlarged Hilbert space.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157598</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Intersection of Physics Modeling and Representation Learning</title>
<link>https://hdl.handle.net/1721.1/157597</link>
<description>Exploring the Intersection of Physics Modeling and Representation Learning
Kitouni, Ouail
Representation Learning has evolved into a multi-purpose tool capable of solving arbitrary problems provided enough data. This thesis focuses on two primary directions: (1) harnessing the power of deep learning for applications in fundamental physics and (2) using physics-inspired tools to improve and shed some light on otherwise large-scale, inscrutable black-box algorithms. We explore a collection of applications that improve different aspects of nuclear and particle physics research across its many stages, from online data selection to offline data analysis. We also tease out how deep learning can open up entirely new avenues of research through the lens of mechanistic interpretability to (re)derive fundamental theory as well as new ways to reinterpret physics measurements. Lastly, we study how physics tools can be useful to better understand the dynamics of deep learning as well as provide a solid foundation for algorithms and training paradigms that expand the frontier of machine learning.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157597</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein spatiotemporal dynamics in gene regulation and disease pathology</title>
<link>https://hdl.handle.net/1721.1/157596</link>
<description>Protein spatiotemporal dynamics in gene regulation and disease pathology
Zheng, Ming
A cell orchestrates billions of proteins to the right place at the right time to perform diverse cellular processes. Over the decades, this field has been evolving by integrating advances in microscopy, biochemistry, and molecular biology to unravel the intricate mechanisms governing protein spatiotemporal dynamics as well as the functional consequences. This thesis focuses on the physical motions of proteins at a length scale of tens of nanometers to several microns, where the apparent diffusion and the condensate dynamics of assembly and disassembly are specifically studied. In the studies presented in this thesis, the functional relevance of protein motion is exemplified in the context of gene regulation and disease pathology. We find that the apparent diffusion of transcription factors (TFs) is preferentially partitioned into slowly diffusing states by interacting with RNA, leading to enhanced chromatin occupancy and gene expression (Oksuz et al., 2023). The assembly and disassembly dynamics of transcriptional condensates are coupled to the active RNA synthesis, linking gene expression and the spatiotemporal organization of transcriptional proteins in a feedback loop (Henninger et al., 2021). In addition to transcriptional proteins, we find insulin receptors (IRs) are incorporated in dynamic condensates in normal cells to perform metabolic signaling transduction. In insulin-resistant cells which could occur in chronic diseases such as type 2 diabetes (T2D), IR signaling is dysregulated, associated with diminished IR condensate dynamics of assembly and disassembly (Dall’Agnese et al., 2022). Furthermore, pathogenic signaling reduces the mobility of key proteins, both inside and outside of condensates, that act in many cellular functions. Such reduced protein mobility under diverse pathogenic stimuli, termed proteolethargy, may account for diverse cellular dysregulation seen in chronic disease (Dall’Agnese, Zheng, Moreno et al., 2024).
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157596</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced Reconstruction Techniques for CUORE: Searching Beyond the Standard Model with Cryogenic Calorimeters</title>
<link>https://hdl.handle.net/1721.1/157595</link>
<description>Advanced Reconstruction Techniques for CUORE: Searching Beyond the Standard Model with Cryogenic Calorimeters
Mayer, Daniel W.
Located within the Laboratori Nazionali del Gran Sasso (LNGS), the Cryogenic Underground Observatory for Rare Events (CUORE) is an experiment primarily searching for neutrinoless double beta decay in ¹³⁰Te. It is the largest operating sub-kelvin cryogenic detector array, instrumenting 988 TeO₂ detector channels at temperatures below 20 mK. CUORE uses the cryogenic calorimeter technique, resolving the thermal signatures from nuclear/particle interactions within crystal absorbers for precise determination of deposited energy. This work establishes methods and analysis techniques to treat CUORE as a segmented detector in aggregate, with a focus towards identifying and reconstructing track-like signatures induced by high-energy through-going particles traversing the detector array. Implementations of such high-multiplicity techniques are used to validate that CUORE can resolve the remaining underground flux of muons within LNGS. This result demonstrates CUORE’s unprecedented size and acceptance as compared to previous cryogenic calorimeter arrays, and has applications towards future searches for neutrinoless double beta decay for which muon-induced backgrounds are non-negligible. Additionally, these methods open up new avenues for CUORE to search for exotic beyond-the-Standard Model particles and interactions, such as particles with fractional electric charge. If realized in nature, fractionally charged particles (FCPs) could be present within the underground flux of cosmic radiation and would leave faint track-like signatures across the detector. We report on a search for FCPs using the first tonne-year of CUORE’s exposure, finding no excess of FCP track candidates over background, and setting leading limits at 90% C.L. on the possible underground flux of FCPs with charges between 1/24 − 1/5 that of an electron. 
Lastly, we introduce differentiable programming methods for the end-to-end training of neural ordinary differential equations to model thermal pulse dynamics within CUORE calorimeter channels. These methods and results improve understanding of detector response, enable improved in situ background characterization, and open novel opportunities for CUORE and future tonne-scale cryogenic calorimeters to search for physics beyond the Standard Model.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157595</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Language Control for Visually Interactive Decision Support Tools in Supply Chain Management</title>
<link>https://hdl.handle.net/1721.1/157594</link>
<description>Natural Language Control for Visually Interactive Decision Support Tools in Supply Chain Management
Guter, Willem J.
Supply chains are complex networks where changing one variable can have unforeseen&#13;
effects on the entire chain. Interactive supply chain visualizations are useful for understanding these effects, and can lead to decreased cost. However, these interactive visualizations&#13;
can require technical and domain expertise to operate and understand. One solution is&#13;
a natural language interface, which lets users control the visualization with natural&#13;
language commands. However, natural language interfaces can be difficult to implement&#13;
and often require application-specific programming or training. This thesis proposes integrating&#13;
a pre-trained large language model as the natural language interface. An example is implemented on top of an existing supply chain network visualization application. Various&#13;
large language models are then evaluated for usability, functionality, and accuracy. We find&#13;
that a state-of-the-art commercial model is able to practically fulfill the role of a natural&#13;
language interface, but that open-source large language models are not currently capable of&#13;
functioning in this way.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157594</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Search for High-Frequency Gravitational Waves with a Modified Axion Detector</title>
<link>https://hdl.handle.net/1721.1/157593</link>
<description>The Search for High-Frequency Gravitational Waves with a Modified Axion Detector
Pappas, Kaliroë Mabelle West
ABRACADABRA-10cm has had great success as a lumped-element axion dark matter pathfinder experiment, with two published axion searches and an extensive background investigation. Now, using the electrodynamics of gravitational waves and a simple change of pickup structures, we are using the ABRACADABRA detector to search for high-frequency gravitational waves in the kHz to MHz range. These higher frequencies may indicate signs of in-spiraling primordial black holes or other beyond-the-Standard-Model phenomena. With careful calibration used to distinguish between the two signals, we introduce the first simultaneous search for both axions and gravitational waves using a lumped-element axion detector. In this thesis I will present the high-frequency cryogenic ABRACADABRA-10cm detector, the background investigations of the detector, and the design and first data from the ABRACADABRA-10cm high-frequency gravitational wave search.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157593</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radioactive Atoms and Molecules for Fundamental Physics</title>
<link>https://hdl.handle.net/1721.1/157592</link>
<description>Radioactive Atoms and Molecules for Fundamental Physics
Udrescu, Silviu-Marian
The Standard Model (SM) of particle physics and the theory of General Relativity represent two of the greatest achievements in physics in the past century. However, despite their success, many fundamental questions remain unanswered: What is the nature of Dark Matter and Dark Energy? Why is there so little antimatter in the Universe? Why is gravity so weak compared to the other fundamental forces? These questions point to the existence of new phenomena waiting to be discovered. High-precision laser spectroscopy experiments using atoms and molecules have emerged as a fruitful approach for searching for new physics effects. Recently, atoms and molecules containing short-lived radioactive isotopes have been proposed as particularly sensitive laboratories to search for physics beyond the SM, especially at the nuclear level. However, many atoms containing very short-lived isotopes are still out of reach for spectroscopic investigations, while radioactive molecules have been completely inaccessible experimentally until recently.&#13;
&#13;
In this thesis, I will present a series of pioneering experiments aimed at harnessing the power of radioactive atoms and molecules to explore nuclear phenomena, both within and beyond the SM. I will start by describing the first-ever precision laser spectroscopy investigation of a radioactive molecule, radium monofluoride (RaF). I will present measurements of the vibrational, rotational, and hyperfine spectrum of RaF, proving its high sensitivity to minuscule nuclear effects. These experiments allowed the quantification of a feasible laser-cooling scheme for RaF and the observation of the effect of the distribution of nuclear magnetization inside the Ra nucleus on the energy levels of RaF. To our knowledge, this is the first time this effect was observed in a molecule, opening the way for using molecules to benchmark ab initio nuclear theory. Finally, I will present measurements of the ionization potential of RaF, showing its suitability for Rydberg-state studies and precise quantum control using external electric fields.&#13;
&#13;
I will then present the theoretical calculations and the status of an experiment aiming to measure hadronic parity violation using single molecular ions inside a Penning trap. The experiment's goal is to use the external magnetic field provided by the trap to fine-tune molecular energy levels of opposite parity close to degeneracy, thus increasing the signal produced by parity violating nuclear properties. The sensitivity to the sought-after signal is expected to be increased by more than twelve orders of magnitude compared to atoms. This amplification will allow the observation of yet-to-be-measured parity violating effects in a molecule. These measurements will be critical to guide our understanding of electroweak nuclear phenomena.&#13;
&#13;
Finally, I will show preliminary results obtained from a novel experiment with the goal of enabling laser spectroscopy studies of atoms and molecules containing radioactive nuclei with lifetimes of 1 ms and below. Such isotopes can't be currently studied spectroscopically. Using an event-by-event Doppler reconstruction, our approach could overcome most of the challenges encountered by state-of-the-art experimental techniques, allowing us to extend our reach toward unexplored regions of the nuclear chart. Such short-lived isotopes are of great importance for our microscopic understanding of nuclei as well as for constraining the properties of nuclear matter.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157592</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating Fine-Tuning of Language Models for Multiple-Choice Questions</title>
<link>https://hdl.handle.net/1721.1/157591</link>
<description>Investigating Fine-Tuning of Language Models for Multiple-Choice Questions
Wang, Ivy A.
This thesis investigates the positional and contextual bias of large language models (LLMs) when used to answer multiple-choice questions (MCQs). Given the increasing use of generative language models in fields ranging from cybersecurity to biomedical research, it is important to understand the causes of their behavior in order to mitigate biases and prevent errors. One known method of improving the performance of LLMs is fine-tuning, wherein a model is additionally trained on data from a specified distribution or subject area. We specifically investigate training data properties related to positional bias in fine-tuned language model performance on correctly answering MCQs. To improve model efficiency, we used parameter-efficient fine-tuning, specifically LoRA (Low-Rank Adaptation), which reduces the dimensionality of weight matrices used in the model’s layers. We verify that if the training data for the model possesses the same qualities and distributions as the test data, the LLM will achieve the best performance. In our experiments, we scaled and balanced our fine-tuning datasets and learned that both processes improve the accuracy on test sets of MCQs.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157591</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exciton Dynamics and Anisotropy in 2D Metal Organochalcogenolate Semiconductors</title>
<link>https://hdl.handle.net/1721.1/157590</link>
<description>Exciton Dynamics and Anisotropy in 2D Metal Organochalcogenolate Semiconductors
Lee, Woo Seok
Silver phenylselenolate (AgSePh) is a novel hybrid organic-inorganic two-dimensional (2D) semiconductor that belongs to the broader class of metal organochalcogenolates (MOCs). Since its blue-emitting excitonic properties were discovered in 2018, AgSePh has attracted attention from the scientific community. From a fundamental science perspective, AgSePh provides an excellent platform for exploring many-body interactions among quasiparticles (such as excitons, phonons, and photons) due to its large exciton binding energy, strong exciton-lattice interactions, and natural photonic cavity structure. From a technological standpoint, its narrow blue emission, tunable bandgap through composition control, chemical robustness, in-plane anisotropy, and low-cost, scalable synthetic methods make AgSePh a promising candidate for photonic and optoelectronic applications. However, we do not yet fully understand how its excitonic properties arise at a fundamental level. The central aim of this thesis is to elucidate the correlation between structure, inorganic composition, organic ligands, and excitonic properties in these novel hybrid 2D semiconductors. First, we present the synthesis and the structural and optical properties of 2D AgEPh (E = S, Se, Te) single crystals, colloidal nanocrystals, and thin films. Importantly, the growth of millimeter-sized single crystalline 2D AgEPh (E = S, Se, Te) enables their crystal structure determination via single crystal X-ray diffraction: AgSPh in P2₁, AgSePh in P2₁/c, and AgTePh in P2₁/c. Second, we explore the underlying mechanism of light emission in AgSePh and AgTePh. Despite having the same crystal structure, these compounds exhibit strikingly different excitonic properties: AgSePh shows narrow photoluminescence (PL) with a minimal Stokes shift, while AgTePh exhibits broad PL with a large Stokes shift.
Using time-resolved and temperature-dependent optical spectroscopy, combined with sub-gap photoexcitation studies, we demonstrate that the exciton dynamics in AgSePh films are dominated by the interaction of free excitons with extrinsic defect states, whereas the dynamics in AgTePh are dominated by intrinsic exciton self-trapping behavior. Third, we study alloying among the AgEPh compounds. We demonstrate that AgSePh and AgTePh form homogeneous alloys with tunable excitonic properties across all compositions, whereas AgSPh and AgSePh/AgTePh exhibit a miscibility gap. These observations are elucidated by density functional theory calculations and correlated with crystallographic information. Fourth, using polarization-resolved micro-absorption, reflectance, and photoluminescence spectroscopy, combined with GW plus Bethe-Salpeter equation calculations, we reveal multiple low-lying excitons with in-plane anisotropy in AgSePh and AgTePh. This showcases the richness of excitonic physics in these materials, which arises from their low-symmetry crystal structures. Finally, we show that the electronic and excitonic structure of AgSePh can be engineered through organic functionalization, resulting in giant excitonic anisotropy and a completely different absorption spectrum in 2D AgSePh-F₂(2,3). This divergence in excitonic properties is attributed to the semi-1D Ag chains in AgSePh-F₂(2,3), in contrast to the hexagonal 2D Ag network in AgSePh. This finding can be generalized to other blue-emitting 2D AgSePh-R compounds, which exhibit either AgSePh-like or AgSePh-F₂(2,3)-like absorption spectra. Overall, this thesis advances the understanding of the structure-composition-excitonic property relationships in these emerging hybrid semiconductors, paving the way for future investigations into this exciting material family.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157590</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Research to Search: Technologies and Techniques of Legal Research, 1880-1980</title>
<link>https://hdl.handle.net/1721.1/157589</link>
<description>From Research to Search: Technologies and Techniques of Legal Research, 1880-1980
Reiss Sorokin, Alex
In 1964, the Ohio State Bar Association (OSBA) embarked on a project to harness computer technology to automate legal research. After three years of investigation, it established the Ohio Bar Automated Research (OBAR) organization and contracted a local computer company, Data Corporation, to develop an electronic legal research service. Despite initial skepticism and mounting costs, these lawyers and technologists managed to launch a working service by 1969. The service – also named OBAR – was available through remote consoles placed in law firms, libraries, and government offices. By 1973, an improved system was relaunched as Lexis, a soon-to-be-national legal information retrieval service. Lexis went on to become a staple of American legal practice while OBAR gradually faded out of the picture. This dissertation tells the story of the OBAR system and its promise of automating legal research. &#13;
What did it take for lawyers to begin using and trusting computer technology for their work? I argue that the automation of legal research required both conceptual and material rearrangement. Legal research was a deeply social activity supported by an intricate infrastructure of people, technologies, and techniques. To be trusted and used, the computer had to be constantly charged with meanings, often contradictory ones. It was presented as a tool that would be integrated into an existing legal research process and a technology that would overhaul legal research. The computer was attributed mechanical qualities, like being objective or operating according to instructions, and human ones, like being sophisticated and capable of conversation. These contradictory meanings, along with the gap between promise and reality, were constantly sewn together as part of the computerized system’s development and marketing process. &#13;
To capture the process of automation, this dissertation traces legal research practices before the computer, the development process of the new technology, and the competing notions of trust and credibility in its early years. The first section traces the splintering of legal research into a distinct task that could be taught, delegated, and automated. In the first chapter, I focus on print legal research technologies and legal research instruction through the first half of the 20th century. I show that innovations in legal research went hand-in-hand with a reallocation of legal work among lawyers and non-legal staff. Examining legal research manuals shows that instruction in types of law book gradually gave way to a more systematic approach to legal research. The second chapter considers the history of legal research work through an examination of the law office and the distribution of labor within it. It shows that the development of legal research into a distinct task that could be delegated was intertwined with social, professional, and technological developments at mid-century. The third chapter describes how the specter of automation focused bar associations’ attention on legal research practices. It shows that legal research fit into a social and professional setting. Lawyers relied on an array of technologies and personnel to produce answers to legal questions. As a whole, the section argues that three factors joined to make legal research into a distinct task, thus making its automation possible: the development of instructional materials and courses on legal research, the growth and bureaucratization of law firms, and the introduction of women and machinery into the law office in the 20th century.&#13;
Two chapters and two short excursuses make up the second section, which focuses on the development and early adoption of the OBAR system. In chapter four, I examine the entanglement of technological choices and ideals in the process of developing the OBAR system in the 1960s. I show that the focus on direct use by lawyers was meant to cast suspicion on human judgment while touting the computer as an objective and trustworthy tool. Excursus one unpacks OBAR’s promise of an interactive system. It shows that at the same time as the system was likened to human dialogue, it offered a substantially different interaction with court cases, a process that altered the epistemic and social setting of legal research. Chapter five considers the reactions of OBAR’s early users as communication consoles were placed in law firms and libraries across Ohio in the 1970s. Relying on call reports and correspondence, I examine controversies around the system’s accuracy and credibility. Excursus two tells the story of what came out of the system’s promise in light of later developments. Focusing on the chasm that developed between lawyers and technologists in defining the system in the 1970s, it explains how an approach that focused on the system as a product prevailed over an approach that viewed the system as a service to the profession. To become a successful national product, Lexis had to shed its connections to the organized bar and give up any social aspirations.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157589</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coherent Terahertz Control and Ultrafast Spectroscopy of Layered Antiferromagnets</title>
<link>https://hdl.handle.net/1721.1/157588</link>
<description>Coherent Terahertz Control and Ultrafast Spectroscopy of Layered Antiferromagnets
Ilyas, Batyr
The central theme of modern condensed matter physics is to understand the emergent phenomena arising from interactions among Avogadro’s number of particles in quantum materials, alongside efforts to control their properties. While powerful transport, thermodynamic, and spectroscopic tools have been developed, they often fall short of revealing the intricate interplay among electronic, spin, orbital, and lattice degrees of freedom. A promising approach involves selectively perturbing one degree of freedom while observing responses in others, made possible by ultrafast lasers with femtosecond time resolution. These advancements not only showcase the capability of ultrafast experiments in understanding complex material properties but also demonstrate the manipulation of ordered phases at ultrafast timescales, thereby opening a laboratory for studying materials in the nonequilibrium regime. This dissertation contributes to the ongoing effort of developing new ultrafast spectroscopy tools, utilizing them to probe lattice, magnetic, and electronic properties, and gaining active control over them. Specifically, it investigates the induction of a new magnetic state with net magnetization using intense low-energy terahertz (THz) pulses in the van der Waals antiferromagnet FePS₃. Critical fluctuations near the phase transition are found to enhance both the magnitude and the lifetime of this new state. Additionally, a broadband two-dimensional (2D) THz spectroscopy technique is developed and employed to study interactions among low-energy collective excitations and to directly identify phonons that induce the new magnetic phase. Furthermore, time-resolved spectroscopy in the visible and near-infrared spectral range is utilized to detect a bound state between phonon and electronic states in the sister compound NiPS₃, and to capture a magnetostriction effect in FePS₃ using coherent phonon spectroscopy, an effect that had eluded conventional diffraction experiments.
Finally, second harmonic generation spectroscopy with microscale spatial resolution is employed to study the multiferroic material NiI₂, demonstrating that its multiferroic order persists down to a single atomic layer, a first of its kind. These findings and tools can potentially be extended to frustrated quantum magnets to control their magnetic phases and potentially detect their collective modes. The 2D nonlinear spectroscopy utilized in this dissertation is gaining attention both theoretically and experimentally as a promising tool for detecting fractionalized spin excitations.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157588</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonizing Long-haul Trucking</title>
<link>https://hdl.handle.net/1721.1/157587</link>
<description>Decarbonizing Long-haul Trucking
Jones, Robert
As climate change poses an ever-increasing challenge for the world, the transportation sector has struggled to mitigate its carbon dioxide emissions. The heavy-duty trucking sector bears particular responsibility: heavy-duty freight trucks account for approximately 30% of highway transportation emissions even though they represent only about 5.5% of vehicles on the road. Heavy-duty trucks are also the backbone of US freight, accounting for 71% of freight delivered to the American people. The corresponding road freight energy consumption has been increasing consistently over recent decades and is expected to grow further in the future. Emissions must be drastically reduced to adhere to the targets of the 2015 Paris Agreement and to limit global warming. This tension raises a crucial question: how can road freight emissions be substantially reduced in the face of growing transportation demand? The question is especially difficult to answer for long-haul class 8 trucks. This study seeks to identify potentially competitive powertrain and fuel combinations and to rule out weaker alternatives.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157587</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Actuated Poppet Hydrogen Peroxide Reactor</title>
<link>https://hdl.handle.net/1721.1/157586</link>
<description>Actuated Poppet Hydrogen Peroxide Reactor
Kirkman, Josef X.
This thesis discusses the design and characteristics of an actuated poppet valve hydrogen peroxide and silver catalyst reactor that can be used to generate compressed gas for self-powered robotic systems. The reactor’s poppet valve conceals and reveals the silver catalyst to control the reaction. The sealing performance of the valve at working pressures is crucial to accurate control of the reactor. This thesis discusses several design variations of the sealing poppet head and how they can improve performance under different design goals.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157586</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the physics of intranuclear organization</title>
<link>https://hdl.handle.net/1721.1/157585</link>
<description>On the physics of intranuclear organization
Sood, Amogh
Eukaryotic nuclei, despite their diverse and crowded chemical milieu and their lack of membrane-bound organelles, achieve precise spatiotemporal organization of their contents and chemistry. It has recently become apparent that cells accomplish this feat by leveraging physical processes such as liquid-liquid phase separation driven by multivalent macromolecular interactions to form biomolecular condensates which can serve as membrane-less organelles for the precise, vectorial organization of intranuclear contents. In particular, the hierarchical and functional packaging of DNA into chromatin is mediated by phase separation. Epigenetic modifications of histone proteins, which DNA wraps around to form nucleosomes, are key determinants of nucleosomes’ condensability and chromatin’s higher-order structure. Chromatin structure, by regulating access of transcriptional machinery to the genome, in turn, has broad implications for cellular processes such as gene regulation and cellular differentiation. Furthermore, there exists a bi-directional feedback between 1D epigenomic sequence and 3D chromatin structure as the former is spread and maintained by enzymes that have a “reader-writer” functionality that allows them to similarly modify nucleosomes close to each other in sequence but not necessarily in space. Recent advances suggest chromatin has the properties of a viscoelastic network and exhibits non-trivial dynamics. Therefore, the dynamics of chromatin structure and the spread and maintenance of epigenetic marks are intimately and inextricably linked yet poorly understood. Part I of this thesis is devoted to understanding the complex interplay between chromatin structural dynamics and stochastic reaction networks describing histone modifications.
Furthermore, given the prominent role phase separation plays in intranuclear organization, we devote Part II of this thesis to study the impact of competition between specific and non-specific interactions on liquid-liquid phase separation coupled to percolation and thereby attempt to elucidate the molecular grammar of phase separating biomolecules and evolutionary pressures that shape them.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157585</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical and Analytical Methods in Low-Dimensional Strongly Correlated Quantum Systems</title>
<link>https://hdl.handle.net/1721.1/157584</link>
<description>Numerical and Analytical Methods in Low-Dimensional Strongly Correlated Quantum Systems
Peng, Changnan
The study of low-dimensional strongly correlated quantum systems lies at the intersection of intricate theoretical models and practical numerical methods, offering deep insights into condensed matter physics. This thesis explores the application of various numerical and analytical methods to these systems. It addresses universal behaviors and phase transitions, exemplified by the phenomenon of multiversality. Specifically, the transition from a 1D Luttinger liquid to a charge density wave insulator, characterized by a transition that is partly Kosterlitz-Thouless and partly Ising, is analyzed using both analytical renormalization group calculations and numerical density matrix renormalization group simulations. Additionally, the thesis introduces a statistical smoothing spline method to pinpoint transition points systematically. The work extends to quantum dynamics, presenting a generic theoretical framework for analyzing quantum-classical adiabatic dynamics with learning algorithms. A provably efficient adiabatic learning (PEAL) algorithm with favorable scaling properties is developed. The algorithm is numerically validated on the 1D Holstein model, demonstrating its precision in predicting dynamics. Furthermore, the thesis derives a Hamiltonian lattice formulation for the 2+1D compact Maxwell-Chern-Simons theory, providing an analytical solution that aligns with continuum theories and facilitating future numerical applications. Through these explorations, the thesis underscores the complementary roles of numerical and analytical methods in advancing the understanding of complex quantum systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157584</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Radiation Shielding Design and Radioactive Waste Assessment of Horizontal Compact High Temperature Gas-Cooled Reactor</title>
<link>https://hdl.handle.net/1721.1/157583</link>
<description>Radiation Shielding Design and Radioactive Waste Assessment of Horizontal Compact High Temperature Gas-Cooled Reactor
Kudriavtseva, Anna
With the objective that nuclear power plants utilizing small High Temperature Gas-Cooled Reactors (HTGRs) can provide economic, environmentally favorable and reliable electricity and heat for community and industrial purposes, Boston Atomics LLC initiated the design of Horizontal Compact HTGR (HC-HTGR). This work addresses shielding, activation analysis and the decommissioning cost assessment as an integrated part of the design process.&#13;
Reinforced regular and borated concrete were considered as shielding materials for the reactor building and Reactor Cavity Cooling System (RCCS) tanks. It was found that for locations of the reactor building where the dose rates during normal operation were greater than the Nuclear Regulatory Commission (NRC) limit of 0.1 rem/hr, 175 cm of borated concrete is required. The shielding concerns motivated the decision to separate RCCS tanks from the reactor room with a 75 cm borated concrete wall to ensure that the radiation levels do not exceed the NRC limit. Additionally, several shielding options were proposed to protect steam generator modules from radiation-induced activation.&#13;
The activation analysis was performed for the key equipment and graphite reflector components of the HC-HTGR design. The core barrel made of Incoloy 800H was characterized as a class C waste component after 40 years of reactor operation. It was proposed that 2.25Cr-1Mo alloy be considered as barrel material to decrease activity levels. The reactor pressure vessel (RPV) and RCCS tubes made of carbon steel were characterized as a class A waste component. The graphite reflector components are characterized as Class C level waste.&#13;
Furthermore, this work discusses the neutron irradiation effects and their impact on the integrity of the barrel, RPV, and graphite reflector against material property changes. It was found that 2.25Cr-1Mo alloy has a higher radiation resistance due to the higher iron content in the composition. Based on the results, the reactor vessel is safe from radiation damage for 32 years of operation. The data evaluated for the graphite reflectors indicate that the components should be replaced after 20 years before they pass the turnaround point. &#13;
The concentrations of radionuclides computed during activation analysis were used to predict the radiation levels from beta and gamma sources that could be encountered during the disposal of the core barrel and RPV. Based on the obtained data, it is clear that if the barrel is not replaced during operation, the radiation dose rate will remain above acceptable levels, requiring a more rigorous disposal approach. The radiation levels are reduced for the reactor vessel as it was exposed to a lower flux and radiation-induced activation. A similar analysis was performed to derive the exposure dose rate from gamma and beta rays that can be detected by a sensor of a refueling camera. Beta particles will deposit most of the energy in a graphite layer, and the camera will register negligible dose rates. The gamma ray estimates indicate that a more enduring refueling machine is required. &#13;
The results of this work provide the disposal costs for HC-HTGR immediate dismantlement and after a given decay period. Overall, the disposal costs of the core barrel, RPV, and graphite reflector are $13 million for the HC-HTGR design after 40 years of full operation if the billable charge limits are set on radioactivity levels. If this option is not considered, the total disposal costs grow to $225 million. However, extending the storage up to 10 years would decrease the activity, reducing the cost of disposal.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157583</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonizing the US Power Sector</title>
<link>https://hdl.handle.net/1721.1/157582</link>
<description>Decarbonizing the US Power Sector
Farnsworth, Amanda
As the second highest national emitter, the US has the opportunity, and responsibility, to reduce emissions and mitigate the impacts of climate change. The power sector has been identified as the linchpin in our national decarbonization strategy, with high electrification goals for the other sectors. As of 2022, the power sector was responsible for more than a quarter of annual emissions. As electrification increases, the importance of decreasing the emissions and emissions intensity of electricity production grows. This thesis explores the challenges and opportunities of decarbonizing the US power sector. Two models were built to complete this analysis: Ideal Grid (IG), a greenfield capacity expansion and economic dispatch model, and Evolving Grid (EG), a brownfield capacity expansion and economic dispatch model. These models are an especially novel addition to the current arsenal of publicly available capacity expansion models because they include embodied emissions, in addition to the industry-standard consideration of power plant tailpipe emissions from fossil fuel combustion. Nine regions of the contiguous US are represented in these models. First, IG is used to highlight regional decarbonization challenges. Regions with significant land available for variable renewable energy (VRE) buildout and strong wind resources had the cheapest paths to a clean grid. Hydropower resources also play a significant role. At deep decarbonization levels, the need for long-duration energy storage (like pumped hydropower storage) increases. The role of embodied emissions is explored, showing that as fossil-fuel consumption decreases and VRE penetration increases, they become non-negligible. To most effectively reduce system emissions, embodied emissions should be accounted for. Next, fusion is integrated into the model to demonstrate its potential role. Assuming an $8,500/kW CAPEX, fusion is not economically competitive unless a carbon constraint is applied. However, at deep decarbonization levels, fusion is prominent in all regions. EG shows that intermediary decarbonization goals before 2050 play a pivotal role in determining fusion adoption and overall fleet composition. Lastly, the versatility and value of the presented models are demonstrated by outlining other potential applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157582</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The structure of hadrons and other potential phases of QCD</title>
<link>https://hdl.handle.net/1721.1/157581</link>
<description>The structure of hadrons and other potential phases of QCD
Schindler, Stella T.
Quantum chromodynamics (QCD) is a mathematical theory describing subatomic particles called quarks and gluons, and the strong force that binds them together into protons and neutrons. This thesis centers on two major thrusts of modern QCD research: (1) uncovering the inner quark and gluon structure of the proton, and (2) mapping out other phases of matter that quarks and gluons form as we vary pressure and temperature. To study these topics, we develop, utilize, and synergize tools in quantum field theory (analytics), lattice gauge theory (numerics), and phenomenology (comparing theory to experiment). Specifically, we use new and existing techniques to access precision information about the inner structure of the proton, via the study of transverse momentum distributions, energy correlators, and diffractive processes at colliders. Additionally, we develop new analytic and numerical techniques for studying QCD phase structure inspired by non-Hermitian physics, and probe the possibility of new exotic phases near the QCD phase transition.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157581</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonreciprocal phenomena in superconductivity</title>
<link>https://hdl.handle.net/1721.1/157580</link>
<description>Nonreciprocal phenomena in superconductivity
Davydova, Marharyta
This thesis introduces and studies several unusual phenomena that arise in low-dimensional systems in the presence of a magnetic field. The first example that we discuss is nonreciprocal superconductivity, which occurs upon simultaneous breaking of inversion and time-reversal symmetries. Nonreciprocal superconductors describe certain classes of unconventional superconductors, including some mixed-pairing and finite-momentum ones. They also occur in engineered systems exhibiting s-wave pairing-based superconductivity, for which we put forward several simple proposals. We demonstrate several striking observable consequences of nonreciprocal superconductivity. These include current rectification in normal metal-nonreciprocal superconductor junctions and the Josephson diode effect, for which we propose a simple and universally applicable mechanism. With the advent of novel low-dimensional symmetry-breaking materials, such as multilayer graphenes and twisted cuprates, as well as modern experimental possibilities involving engineered systems, nonreciprocal phenomena could eventually become an indispensable tool for revealing the nature of superconducting orders.&#13;
&#13;
The second part of this thesis concerns doped Mott insulators in a magnetic field, described by a triangular-lattice Fermi-Hubbard model in the limit of strong interaction. This is relevant for many novel materials, such as moiré transition metal dichalcogenide bilayers. We predict a new bound state, the spin polaron, formed by binding a doped hole with a magnon (spin flip). Spin polarons have a large effective mass and are spin-3/2 quasiparticles. The mechanism for their formation is kinetic frustration, and therefore their binding energy is proportional to the hopping t, which is the largest energy scale within a single Hubbard band. We then propose a new phase diagram for the triangular-lattice Hubbard model in a magnetic field as well as multiple experimental signatures. We hope that the prediction of the spin polaron, which has since been experimentally confirmed, will give rise to novel mechanisms for superconductivity and correlated orders in doped Hubbard models.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157580</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding Dark Matter Halos through the Lens of Machine Learning</title>
<link>https://hdl.handle.net/1721.1/157579</link>
<description>Decoding Dark Matter Halos through the Lens of Machine Learning
Nguyen, Tri V.
Dark matter (DM) constitutes about 85% of the matter in the Universe, yet its particle nature remains one of the greatest outstanding questions in astrophysics. DM halos act as the scaffolding within which galaxies form, but the specific mechanisms through which they influence galaxy evolution are not fully understood, especially at galactic scales. While cosmological simulations and astrophysical surveys have made significant strides in constraining DM properties, upcoming surveys will generate terabytes of complex, high-dimensional data. It is thus imperative to develop new methodologies capable of interpreting and linking these data with theoretical models. Machine learning techniques, coupled with advancements in cosmological simulations, present a transformative opportunity. In this thesis, I conduct a multi-scale investigation into the nature of DM and its role in shaping galaxies by integrating advanced machine-learning techniques with cutting-edge cosmological simulations. First, I employ simulation-based inference and graph neural networks to infer the mass density profiles of DM halos in dwarf galaxies from their stellar kinematics. Next, I develop a generative model using normalizing flows and recurrent neural networks to reconstruct the mass assembly histories of DM halos in cosmological simulations. Furthermore, I utilize variational diffusion models and Transformer-based neural networks to perform point-cloud modeling of satellite populations under alternative DM models. Finally, I create synthetic Gaia surveys from Milky Way-like simulations, bridging the gap between simulations and observations. This thesis demonstrates the transformative potential of machine learning techniques to probe DM properties and galaxy formation. The methodologies developed herein provide new avenues for interpreting vast and complex astronomical datasets and offer insights that could lead to a deeper understanding of the fundamental nature of DM.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157579</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring New Frontiers in High Energy Physics: Boosted Resonances Decaying To Quarks, Foundation Models, and Heterogeneous Computing at the CMS Experiment</title>
<link>https://hdl.handle.net/1721.1/157578</link>
<description>Exploring New Frontiers in High Energy Physics: Boosted Resonances Decaying To Quarks, Foundation Models, and Heterogeneous Computing at the CMS Experiment
Krupa, Jeffrey
In this thesis, we introduce machine learning (ML) tools to optimize data taking and analysis at data-intensive scientific experiments, focusing on the CMS experiment at the Large Hadron Collider (LHC). A path to a foundation model for LHC physics is described, where self-supervised learning is enabled through the re-simulation of decaying partons. The first experiments with remote operation of GPUs in LHC experiments are presented. These tools will help equip experiments at the High-Luminosity LHC (HL-LHC) to perform precision measurements and searches for new physics, for example, low mass resonances decaying to quarks. In this context, a search for narrow resonances decaying into quark-antiquark pairs produced with high transverse momentum is presented. The analysis is based on data collected in Run 2 with the CMS detector at the LHC in proton-proton collisions at √&#119904; = 13 TeV. Resonance candidates are reconstructed as large-radius jets and identified using a state-of-the-art jet tagging algorithm. This analysis presents the most sensitive limits for new spin-1 bosons coupling universally to quarks and spin-0 bosons coupling preferentially to heavier quarks. The invariant jet mass spectrum is probed for a potential narrow peaking signal over a smoothly falling background. Upper limits at 95% confidence level are set on the coupling of narrow resonances to quarks as a function of the resonance mass. For masses between 50 and 300 GeV, these are the most sensitive limits to date on all possible mediators. Using conventions on s-channel dark matter mediators, limits are set on dark photons and dark matter in the context of the relic density.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157578</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectroscopic study of emergent electronic phases in transition metal based compounds</title>
<link>https://hdl.handle.net/1721.1/157577</link>
<description>Spectroscopic study of emergent electronic phases in transition metal based compounds
Song, Qian
Antiferromagnets with non-relativistic spin splitting are outstanding candidates as the next generation of spintronic materials owing to their electron-volt (eV) scale spin splitting, ultrafast spin dynamics and nearly vanishing stray fields. Achieving voltage-based control of spin polarization in antiferromagnets is of great interest for realizing energy-efficient and compact devices for information storage and processing. Spin spiral type-II multiferroics exhibit an inversion-symmetry-breaking antiferromagnetic order which directly induces ferroelectric polarization, allowing for symmetry protected cross-control between spin chirality and polar order. This intrinsic coupling between the magnetic and dipolar order parameters results in record-strength magnetoelectric effects. Two-dimensional materials possessing such intrinsic multiferroic properties have been long sought for harnessing magnetoelectric coupling in nanoelectronic devices. The recent discovery of intrinsic magnetic order in atomically-thin van der Waals (vdW) materials has created new opportunities for the study of collective spin phenomena in free-standing two-dimensional (2D) systems and nanoscale devices. Among possible multiferroic vdW materials, several families have been identified, and of particular promise is the magnetic semiconductor NiI₂. The multiferroic state of NiI₂ is characterized by a proper-screw spin helix with given handedness, which couples to the charge degrees of freedom to produce a chirality-controlled electrical polarization. We use a suite of optical techniques that reveal an ordered magnetic, polar state that persists down to the ultrathin limit of monolayer NiI₂.&#13;
&#13;
Recent development of the spin-group formalism has identified a new class of magnets with nontrivial spin textures, including even-parity d-, g-, or i-wave altermagnets and odd-parity p-wave antiferromagnets. The chiral magnetic order in NiI₂ breaks Inversion-Time-Reversal-Translation (PTτ) symmetry and Spin-Rotation-Translation (Uτ) symmetry, allowing for spin splitting even in the absence of spin-orbit coupling (SOC). We provide direct evidence that the spin polarization in a spin spiral type-II multiferroic exhibits p-wave (odd-parity) character and directly couples to the spin chirality, enabling electrical control of non-relativistic spin splitting. Our findings represent the first observation of a p-wave antiferromagnet, and open a new frontier of voltage-based switching of non-relativistic spin splitting in vdW antiferromagnets.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157577</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond Color: Lattice Gauge Theory for Strongly-Coupled Physics</title>
<link>https://hdl.handle.net/1721.1/157576</link>
<description>Beyond Color: Lattice Gauge Theory for Strongly-Coupled Physics
Oare, Patrick R.
Quantum Chromodynamics (QCD) is the prototypical strongly interacting Quantum Field Theory (QFT). It is the interaction that yields the strong nuclear force that binds protons and neutrons together. The underlying mathematical picture of QCD is known exactly: it is an &#119878;&#119880;(3) gauge theory coupled to six flavors of fermions (the quarks). Despite this, it remains difficult to compute QCD observables because QCD is strongly-coupled, and typical perturbative methods used in QFT only work in specific regimes of validity for QCD. The most successful ab initio method to study QCD is Lattice Gauge Theory (LGT). This computational formalism computes observables by discretizing spacetime to render the path integral tractable. The primary focus of LGT in the 40 years since its inception has been the study of QCD, as the theory has direct physical relevance to so much of our universe, and the desire to understand QCD has driven many conceptual breakthroughs and advancements in LGT. Despite the focus on QCD, lattice methods find significant utility in studying other strongly-coupled gauge theories related to and unrelated to QCD. This thesis will focus on applying LGT to strongly-coupled physics inside and outside of QCD and on developing techniques within LGT that may be used to better understand said theories. First, the spectral function reconstruction problem in LGT is considered, and a new method for spectral function reconstruction in LGT is presented. Spectral functions describe the energy states of a theory: bound states, resonances, and continuum thresholds. The presented reconstruction method uses the analytic properties of the retarded Green’s function to constrain the full set of spectral functions that may be reconstructed from LGT data using the Nevanlinna-Pick interpolation problem. Next, two theories will be numerically studied using LGT. The first is the Standard Model Effective Field Theory (SMEFT). 
The SMEFT process that is considered is neutrinoless double &#120573; (0&#120584;&#120573;&#120573;) decay, a hypothetical decay of two neutrons into two protons and two electrons. LGT is used to compute non-perturbative matrix elements for the unphysical &#120587;⁻ → &#120587;⁺&#119890;⁻&#119890;⁻ transition, which contributes to nuclear 0&#120584;&#120573;&#120573; decay, and for the decay of the dinucleon &#119899;⁰&#119899;⁰ → &#119901;⁺&#119901;⁺&#119890;⁻&#119890;⁻. Connections to Effective Field Theory studies of 0&#120584;&#120573;&#120573; decay will also be discussed. Finally, adjoint QCD (QCD₂), the theory of a Majorana fermion coupled to a &#119878;&#119880;(&#119873;) gauge field in the adjoint representation in 1+1 spacetime dimensions, will be studied using LGT. QCD₂ is a well-studied QCD-like theory whose properties have been crucial in the study of confinement. Lattice methods are used to compute the static quark potential, string tensions, and the low-lying spectrum of the theory, which will provide input that may be used to better understand QCD₂ and the confinement mechanism in general.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157576</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Sample to Answer: Innovations in sample processing and CRISPR-based diagnostics for enhanced clinical translation and field deployment</title>
<link>https://hdl.handle.net/1721.1/157575</link>
<description>From Sample to Answer: Innovations in sample processing and CRISPR-based diagnostics for enhanced clinical translation and field deployment
Arizti Sanz, Jon
The recent (re)emergence and rapid spread of infectious disease agents underscore the urgent need for effective disease prevention and control strategies. Accurate and timely diagnostics serve as the basis for effective disease management, enabling the rapid identification of disease outbreaks, guiding treatment decisions, and informing public health interventions. However, the global diagnostic testing capacity is currently insufficient to effectively respond to emerging infectious disease threats, which has fueled the spread of Lassa, Ebola, SARS-CoV-2, and Zika virus in recent years. Globally, this diagnostic gap is particularly pronounced at the primary care or community level, an essential site for swift and effective response. Widespread, rapid, and user-friendly diagnostic tests are vital components of effective outbreak containment and response strategies, as they enable the rapid identification of new cases, thereby facilitating timely intervention and preventing further pathogen spread. Therefore, addressing the critical need in the global diagnostic testing infrastructure requires the development and deployment of diagnostic tools that are accurate, affordable, and accessible in decentralized settings.&#13;
&#13;
Existing diagnostics fall short in bridging the current diagnostic gap, but recent advances in nucleic acid-based technologies, and CRISPR-based diagnostics (CRISPR-Dx) in particular, have shown significant promise in transforming infectious disease detection. CRISPR-Dx are easily programmable, robust, sensitive, isothermal, and highly specific, but further advances will be required to facilitate their use outside of centralized laboratories. This thesis aims to address this critical gap in global diagnostic testing capacity, focusing on the innovation, validation, and deployment of CRISPR-Dx for infectious diseases. We first developed SHINE, a rapid and sensitive Cas13-based nucleic acid detection platform without the need for nucleic acid extraction. In this first version (SHINEv1), we simplified the CRISPR-Dx workflow, reducing user manipulations and assay time, and enabling automated interpretation of assay results using a companion smartphone application. Next, we made further improvements and thoroughly validated this platform to create SHINEv2, a further streamlined, equipment-free, and easily deployable technology with the ability to discriminate SARS-CoV-2 variants of concern (VOCs). Given the excellent programmability of CRISPR-Dx, we expanded the use of SHINE beyond SARS-CoV-2 to other clinically relevant pathogens. We developed and validated SHINE assays to detect and discriminate species, subtypes, and variants of influenza virus, with important implications for public health and clinical care. We also designed and tested multiplexed diagnostic assays for the detection and differentiation of three tick-borne pathogens in clinical samples. 
Finally, given the inadequacy of existing sample processing methods – and their importance to nucleic acid test deployment – we developed a high throughput experimental workflow to analyze the effects of chemical reagents on diagnostic assay performance and nuclease activity in patient samples using a commercially available microfluidic platform. Together, the research presented in this thesis contributes to the development of more effective, accessible, and field-deployable diagnostic solutions, thereby enhancing our ability to respond to the global burden of infectious diseases.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157575</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Total Synthesis of Verticillin A and Application of Diazene-Directed Fragment Assembly to the Synthesis of Heterodimeric Epidithiodiketopiperazine Derivatives</title>
<link>https://hdl.handle.net/1721.1/157574</link>
<description>Total Synthesis of Verticillin A and Application of Diazene-Directed Fragment Assembly to the Synthesis of Heterodimeric Epidithiodiketopiperazine Derivatives
Knauss, Walker
I. Total Synthesis of (+)-Verticillin A&#13;
&#13;
We report the first total synthesis of (+)-verticillin A, completed in 16 steps. Our initial strategy of late-stage sulfidation on a dimeric substrate produced an undesired diastereomer of the ETP. We were able to access an ETP with the desired diastereoselectivity by effecting sulfidation on an epimerized, monomeric substrate. In order to install a disulfide with the desired facial selectivity, we developed a stepwise sequence involving stereoselective formation of a C15-benzhydryl disulfide followed by intramolecular sulfidation at C11. Because ETPs are unstable to carbon-centered radicals and irradiation with UV light, we developed conditions to reduce the disulfide and protect the resulting thiols as alkylsulfides prior to cobalt reductive dimerization and photochemical desulfonylation. Finally, deprotection of the thiols and oxidation delivered the ETP natural product (+)-verticillin A.&#13;
&#13;
II. Synthesis of Heterodimeric ETP Derivatives Using Diazene-Directed Fragment Assembly&#13;
&#13;
We report the development of a novel route to heterodimeric ETP derivatives using diazene-directed fragment assembly. This is the first application of diazene-directed coupling to the synthesis of dimeric diketopiperazine alkaloids. Our group’s initial route to heterodimeric ETP derivatives relied upon reductive cobalt dimerization, which produces a nearly statistical mixture of homo- and heterodimeric products. In contrast to the initial route, the diazene-based approach disclosed herein enables selective heterodimerization. To demonstrate the utility of heterodimeric ETP derivatives, we have synthesized an ETP-diazirine photoaffinity labelling probe, which we hope can be used to study the interactions of ETPs with cellular targets.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157574</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explorations in two dimensional strongly correlated quantum matter: from exactly solvable models to conformal bootstrap</title>
<link>https://hdl.handle.net/1721.1/157573</link>
<description>Explorations in two dimensional strongly correlated quantum matter: from exactly solvable models to conformal bootstrap
Jones, Robert A.
This dissertation presents two projects that touch upon the role of quantum mechanics in classifying phases of matter and their transitions. In the first project, we set out to answer: is it possible to find a lattice model in the Ising universality class that realizes the Kramers-Wannier symmetry in such a way that it squares to 1, rather than to a lattice translation as in the usual Ising model? Using insights from symmetry-protected topological phases of matter, we answer in the affirmative, with the caveat that the symmetry, beyond being non-onsite, actually acts on a Hilbert space that is not a local tensor product. The second concerns the nature of the Néel-VBS deconfined quantum critical point. This is thought to be described by the noncompact CP¹ model, which we argue to be continuously connected to the theory accessed by the 2 + ε expansion for the O(3) NLSM. To shed light on the nature of the DQCP, we perform conformal bootstrap studies of the O(3) model in 2 &lt; d &lt; 3.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157573</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Squeezing the Quantum Noise of LIGO below the Standard Quantum Limit</title>
<link>https://hdl.handle.net/1721.1/157572</link>
<description>Squeezing the Quantum Noise of LIGO below the Standard Quantum Limit
Jia, Wenxuan
The year 2015 marked the first detection of a gravitational wave signal from a pair of black holes located 410 megaparsecs (1.3 billion light-years) away. Their merger unleashed an immense amount of energy, with the peak emission rate surpassing the combined power of all luminous stars in the observable universe. Unlike stars, the merger of two black holes does not emit electromagnetic radiation like visible light but instead illuminates the universe with gravitational radiation. These waves traveled freely for over a billion years before being captured by the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors. Upon reaching Earth, these waves caused a minuscule length change between the LIGO mirrors, on the order of 10^(−18) m, a thousand times smaller than the diameter of a proton.&#13;
&#13;
The unprecedented sensitivity of LIGO requires an extremely low noise level. The design of LIGO as an interferometer converts the gravitational-wave signal to an optical signal, which is measured on photodiodes along with other noise sources. One of the noise sources is the quantum noise due to the quantum vacuum fluctuations of the light itself. Besides the light, the mirror also has quantum-mechanical features and experiences quantum back-action when we probe it with light. Knowing the position of the mirror very well would inevitably perturb its momentum, which prevents us from precisely making the next measurement of the position. This is fundamental physics dictated by Heisenberg’s uncertainty principle. In the case of continuous measurement like LIGO, the quantum back-action leads to an apparent sensitivity limit known as the Standard Quantum Limit (SQL). It tells us how precisely we can measure an object with light.&#13;
&#13;
The SQL applies when using uncorrelated photons or coherent light, such as a laser beam, to measure the object. However, introducing quantum correlations through squeezed light, a technique called squeezing (Chapter 2), can circumvent this limit. Squeezed vacuum, a non-classical light state, exploits quantum correlations between photon pairs to reduce vacuum fluctuations in one quadrature at the cost of another. By manipulating the quantum correlation between light and the mirror, the squeezed vacuum can potentially reduce quantum noise below the SQL, a concept explored in frequency-dependent squeezing. This thesis develops a first-principles model of quantum noise in LIGO (Chapter 3) and investigates how squeezing can mitigate it while considering practical factors like optical losses and mode-mismatch (Chapter 4). These theories are constructed with a bottom-up approach. Experimental details on generating and utilizing frequency-dependent squeezing for LIGO are also discussed (Chapter 5), culminating in the observation of LIGO’s quantum noise below the SQL (Chapter 6).&#13;
&#13;
Besides squeezing, increasing optical power can also reduce quantum shot noise. Nevertheless, maintaining high power levels (fractions of megawatts) in LIGO is challenging due to experimental imperfections, such as unintended point absorbers on the mirror coating. This thesis analyzes the thermoelastic distortions caused by these absorbers, which limit achievable optical power in current and future gravitational-wave detectors (Chapter 7).
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157572</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast dynamics in quantum materials probed by time-and-momentum-resolved techniques</title>
<link>https://hdl.handle.net/1721.1/157571</link>
<description>Ultrafast dynamics in quantum materials probed by time-and-momentum-resolved techniques
Su, Yifan
The interactions of quasiparticles in quantum materials give rise to intriguing phenomena, including magnetism and superconductivity. However, these interactions are often challenging to understand due to the intertwining of multiple degrees of freedom, such as charge, spin, orbital, and lattice. To fully understand such strongly correlated systems, a suite of experimental techniques that respectively probe various degrees of freedom and simultaneously resolve multiple channels, including energy, momentum, time, and space, is highly desired. This poses a significant challenge for the entire community. In this dissertation, I will focus on a series of experiments performed on quantum material systems utilizing several multi-resolution techniques. Ultrafast electron diffraction (UED) and time-and-angle-resolved photoemission spectroscopy (trARPES) are tools that I co-developed with my colleagues at MIT in the past several years. Supplemented by the time-resolved X-ray diffraction (trXRD) setup at free electron laser facilities around the world, they provide direct access to lattice (UED and trXRD) and electronic (trARPES) structures in quantum materials on an ultrafast timescale of a few hundred femtoseconds. The first part of the dissertation will briefly introduce assorted aspects of ultrafast phenomena as well as the fundamental principles and instrumentation of the several time-and-momentum-resolved techniques. Following the introduction to these time-and-momentum-resolved techniques, the second part of the thesis focuses on the coherent acoustic phonons in quantum materials observed with UED. The crystalline lattice is the building block of any solid-state system and, thus, the most important aspect in condensed matter physics research. The study of coherent acoustic phonons, the fundamental coherent excitation of the lattice, could be traced back to the 1980s when solid-state ultrafast lasers were first developed.
However, the knowledge about the excitation mechanism was not complete. In this part of the thesis, I will introduce a new pathway for launching coherent acoustic phonons: magnetostriction, and discuss the spin-mediated shear oscillator enabled by this mechanism in a van der Waals antiferromagnet. I will further discuss the original methodology I developed that uses coherent acoustic phonons detected with UED as a picosecond-timescale "lock-in" experiment that senses nano-scale mechanical motions in ultra-thin quantum materials. The last part of the dissertation will focus on charge density wave (CDW) phase transitions in quantum materials. CDWs are systems in which strong interplay between electrons and phonons drives a phase transition that modulates the charge density and is thus accompanied by periodic lattice distortions. In this dissertation, I will focus on systems with multiple interacting CDW orders. These systems are ideal platforms for studying the interplays among multiple order parameters. The suite of probes, including UED, trXRD, and trARPES, offers a comprehensive view of CDW systems from both phononic and electronic perspectives. This part of the thesis will examine a series of CDW materials with multiple CDW orders, including ErTe₃, EuTe₄, and CsV₃Sb₅. Via a series of ultrafast multi-messenger experiments, I will survey various origins and behaviors of CDW interactions and answer longstanding questions about the nature of CDW ground states in these quantum materials. The overarching theme of this dissertation is to establish a paradigm of problem-solving in quantum materials research via a combination of multiple channels acquired from a suite of ultrafast momentum-resolved techniques. Coherent phonons and CDW systems are two of the richest playgrounds in the ultrafast regime. I am going to investigate various cases where an ultrafast laser pulse decodes the intertwined degrees of freedom in quantum materials.
The insight developed in these case studies may be carried over to other quantum material systems with emergent quantum states, such as superconductivity and magnetic orders.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157571</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast Terahertz Spectroscopy for the Manipulation and&#13;
Elucidation of Correlated Quantum Materials</title>
<link>https://hdl.handle.net/1721.1/157570</link>
<description>Ultrafast Terahertz Spectroscopy for the Manipulation and&#13;
Elucidation of Correlated Quantum Materials
Allington, Clifford
Light-matter interactions are at the heart of quantum mechanics. The photoelectric effect, blackbody radiation, and the hydrogen emission spectrum were all experimental observations using light and its interaction with matter which led to the discovery of the quantum mechanical nature of the universe. In modern research, the interactions of light with matter play a significant role in both understanding the properties of and controlling various aspects of quantum materials, a class of materials whose macroscopic properties are only understood through quantum mechanics. Quantum materials are often categorized into two classes: topological materials and strongly correlated materials, though the cross-over and interplay between these two aspects is a significant field of study as well. Strongly correlated materials exhibit exotic physical phases such as magnetism, superconductivity, or heavy fermion formation due to the strong interactions of electrons. Many of these properties hold significant promise for application, yet the ability to predict correlated physics from a theoretical standpoint is still at a young stage of development. To this end, experimental efforts to demonstrate and understand the interplay between different degrees of freedom in a material (spin, charge, lattice, and orbital) are essential for progressing in this direction.&#13;
&#13;
In this thesis, a variety of light-matter interactions using ultrafast techniques are explored in a set of quasi two-dimensional strongly correlated materials. These are bulk materials, whose properties are strongly founded in the two-dimensional layers stacked on top of one another. A variety of Optical-Pump Terahertz-Probe spectroscopic methods are used to drive a system out of equilibrium while monitoring the low-energy physics in the terahertz (THz) spectral range. This part of the electromagnetic spectrum is essential to understanding many aspects of strongly correlated physics. For example, the charge carriers in a metallic (or photoexcited) material have a strong spectral weight here, and many of the collective modes of insulating phases, such as phonons or magnons, occur at these energies as well. Specifically, the collective modes of two van der Waals antiferromagnets are excited coherently with the use of ultrafast optical pulses. In the antiferromagnet NiPS₃, a new mechanism for launching a coherent magnon is discovered. In the multiferroic antiferromagnet NiI₂, evidence for a new type of quasiparticle, an electromagnon-polariton, is demonstrated in a non-equilibrium sample. Further, preliminary data regarding the measurement of a new type of Kondo hybridization gap (a pseudogap) in the kagome strange metal Ni₃In is reported using the photoexcited dynamics and the Rothwarf-Taylor bottleneck model.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157570</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instrumental Effects in 21 cm Cosmology: One-point Statistics and Power Spectrum with the HERA Interferometer</title>
<link>https://hdl.handle.net/1721.1/157569</link>
<description>Instrumental Effects in 21 cm Cosmology: One-point Statistics and Power Spectrum with the HERA Interferometer
Kim, Honggeun
The epoch of reionization (EoR) signifies a critical phase in the universe’s evolution, marking the shift from a predominantly neutral intergalactic medium to the ionized state observed today. A key aspect of studying the EoR involves observing the redshifted 21 cm line emission with radio telescopes. A significant challenge in this endeavor is isolating the faint 21 cm signals from bright foreground emissions and systematics. This collection of works focuses on understanding the impact of instrumental systematic effects on statistical measurements, such as the one-point statistics and power spectrum, using the Hydrogen Epoch of Reionization Array (HERA). First, for the first time, I investigate one-point statistics measured from image cubes based on HERA Phase I observations after foreground removal. I highlight the influence of systematics on these measurements by measuring the second and third moments. These analyses show that, despite efforts to mitigate systematics, the residual systematics still cause deviations in the measurements from the expected values. In addition, I evaluate EoR models against observational data, suggesting the second moment measurements likely reject the cold reionization model characterized by inefficient X-ray heating. The third moment, which captures non-Gaussianity features of the signals, is significantly diminished by the instrument response and further reduced by the foreground removal process, making it challenging to probe non-Gaussianity. However, there remains the potential to detect some skewness at low redshifts. One potential systematic for HERA involves calibration errors stemming from per-antenna perturbations due to feed misalignment. I have simulated these calibration errors by modeling realistic perturbed primary beams for HERA Phase II observations.
The chromatic calibration errors are critical since they can cause foreground emission to contaminate the region of Fourier space expected to be dominated by cosmological signals. I then present the work focused on developing a method to mitigate the calibration errors and foreground leakage, thereby recovering the clean EoR window.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157569</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Studies on the Chelating Ligand Effects&#13;
of Novel Borafluoronium Ions</title>
<link>https://hdl.handle.net/1721.1/157568</link>
<description>Systematic Studies on the Chelating Ligand Effects&#13;
of Novel Borafluoronium Ions
Allen, Marissa D.
This study explores the synthesis and characterization of borafluoronium ions via a ligand-based strategy using bidentate amine and phosphine bases as chelating agents to cationic boronium ions. The borafluoronium complexes A–C were synthesized in high yields (80%–95%) and characterized using NMR spectroscopy and single crystal X-ray diffraction. Further investigations into the coordination of other bisphosphine ligands, such as dppe, rac-BINAP, and Xantphos, resulted in the formation of Lewis adducts rather than the desired borafluoronium ions. The challenges in isolating these species are attributed to steric and chelate effects inherent to the ligands, with NMR analysis providing insights into the coordination chemistry and stability of these complexes. This work advances the understanding of borafluoronium ion formation and the impact of ligand structure on their properties.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157568</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical and core-level X-ray spectroscopy of correlated two-dimensional materials</title>
<link>https://hdl.handle.net/1721.1/157567</link>
<description>Optical and core-level X-ray spectroscopy of correlated two-dimensional materials
Occhialini, Connor Alexander
The intersection of low-dimensionality and strongly correlated electrons in van der Waals (vdW) materials offers a rich landscape of ordered phases and associated excitations for potential applications in nanoelectronics. The coupling between distinct degrees of freedom in correlated materials provides routes to realize novel functional properties, which can be further manipulated by the high tunability intrinsic to vdW materials through, e.g., heterostructures and doping. However, identifying the mechanism of correlated phases poses a fundamental challenge due to coexistent and competing orders. This requires detailed knowledge of the microscopic interactions/excitation spectra, methods to disentangle the individual roles of coexistent orders, and selective probes of symmetry-breaking within different coupled degrees of freedom. In this thesis, we demonstrate the utility and complementarity of resonant X-ray spectroscopy and symmetry-selective optical probes in combination with appropriate external tuning parameters (e.g. strain, pressure, ligand substitution, layer number) for revealing the origin of correlated phases in low-dimensional vdW materials. We first investigate the triangular lattice antiferromagnet NiI₂. Frustrated exchange interactions result in a helimagnetic ground state and spin-induced ferroelectric order, making bulk NiI₂ a type-II multiferroic. Using a combination of optical spectroscopic probes, including Raman, magneto-optics, and second harmonic generation, we demonstrate the persistence of multiferroic order to the single-layer limit. We then aim to resolve the microscopic magnetic interactions and their interplay with the lattice symmetry to identify the origin of the magnetic ground state.
Towards this goal, we investigate the magnetic ground state and transition temperature versus hydrostatic pressure and layer number, and directly probe the evolution of magnetic/structural orders with resonant magnetic X-ray scattering/structural diffraction, respectively. From these results, we demonstrate the central role of interlayer exchange interactions and their coupling to the structural symmetry in driving the magnetic ground state of NiI₂. We next investigate the broader class of triangular lattice nickel dihalides, NiX₂ (X = Cl, Br, I), to identify the origin of sharp optical excitations, i.e. excitons, in nickel-based vdW magnets. We employ Ni-L₃ edge resonant inelastic X-ray scattering (RIXS) to access a q-resolved and site-specific view into the excitation spectra. We identify the sharp excitons with spin-forbidden intra-configurational multiplets of octahedrally-coordinated Ni²⁺, which become renormalized by Ni-X charge transfer. We also observe a finite dispersion of these excitations, demonstrating a multiplet delocalization that is controlled by the ligand-tuned charge transfer gap in a process analogous to ground state superexchange. These results establish the microscopic origin of these excitons and provide a mechanism to explain their possible coupling to the magnetic order/excitations. Finally, we study the iron-based superconductor FeSe, which displays a rotational symmetry breaking electronic nematic phase in proximity to unconventional superconductivity without magnetic order. To understand the origin of nematicity, we investigate the ordering of the orbital degrees of freedom using X-ray linear dichroism with in-situ uniaxial strain tuning, electronic transport measurements and structural diffraction. We observe a lattice-independent orbital polarization acting as the primary nematic order parameter. 
This resolves the orbital origin of nematicity in FeSe and suggests that anisotropic spin fluctuations are the mechanism of unconventional superconductivity.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157567</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Collagen-mimetic peptides for diagnosis and analysis</title>
<link>https://hdl.handle.net/1721.1/157566</link>
<description>Collagen-mimetic peptides for diagnosis and analysis
Borgula, Isabella M.
Collagen, the most abundant protein in the human body, is an essential scaffold for tissue development, regulation, and homeostasis. As a major structural component of the extracellular matrix, collagen is not static. Rather, it is highly diverse and dynamic, actively participating in tissue physiology. Collagen can be a challenging protein to study due to its massive size and heterogeneity across subtypes. A valuable tool to study and better understand collagen is a technology known as collagen-mimetic peptides (CMPs), which are synthetic peptides that mimic the natural structure of collagen. These peptides can be applied to study collagen structure and function, from its macromolecular architecture in tissues to the significance of molecular modifications on its amino acid sidechains. This thesis explores the application of CMPs in diagnostic applications, in which CMPs detect aberrations in native collagen, and analytical contexts, in which CMPs act as a simplified system to understand collagen biochemistry. Chapter 2 investigates the ability of CMPs to identify collagen remodeling in a mouse model of pulmonary fibrosis, demonstrating their potential as non-invasive diagnostic tools for fibrotic diseases. Chapter 3 analyzes the collagen-rich desmoplastic reaction surrounding pancreatic ductal adenocarcinoma (PDAC) in murine models and human samples, highlighting the utility of CMPs in characterizing tumor microenvironments. Finally, Chapter 4 examines the structural implications of threonine phosphorylation on collagen stability, showcasing the value of CMPs in studying posttranslational modifications. The findings discussed in this thesis lay a foundation for future CMP applications in targeted drug delivery and biomaterials design.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157566</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization under ecological realism reproduces signatures of human speech perception</title>
<link>https://hdl.handle.net/1721.1/157565</link>
<description>Optimization under ecological realism reproduces signatures of human speech perception
Magaro, Annika K.
Recent advances in machine learning have made real-world perception tasks feasible for computers, in many cases approaching levels of performance similar to those of humans. In particular, optimizing models for ecologically realistic training datasets has helped to yield more human-like model results. In the field of speech recognition, models trained under realistic conditions with simulated cochlear input reproduce some characteristics of human speech recognition. However, it is unclear how similar the behavior of these models is to that of humans across the many ways in which speech can be manipulated or degraded, since human and model behavior have not been extensively compared. In this paper, we address this question by comprehensively testing a neural network model trained in ecological conditions across a large set of speech manipulations, comparing its behavior to that of humans. We find that training in ecological conditions yields a fairly good overall match to human behavior, with some discrepancies that can be largely resolved by training specifically on these conditions. The results support the idea that the phenotype of human speech recognition can be understood as a consequence of having been optimized for the problem of speech recognition in natural conditions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157565</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precision Metrology with Ytterbium Ions for New Physics Search</title>
<link>https://hdl.handle.net/1721.1/157564</link>
<description>Precision Metrology with Ytterbium Ions for New Physics Search
Kniazev, Evgenii
Modern physics faces a growing discrepancy between the success of the Standard Model and the body of evidence pointing to New Physics beyond it. A powerful method of New Physics searches is using quantum sensing tools based on Atomic, Molecular, and Optical physics. In particular, modern optical atomic clocks demonstrate unprecedented accuracy and precision. Complementary to high-energy searches with particle colliders, atomic clocks are used to place stringent bounds on tests of fundamental physics. One of the possible candidates for physics beyond the Standard Model is a carrier of a fifth force. Such a hypothetical particle that mediates interactions between leptons and quarks can potentially be detected in a tabletop atomic clock experiment. In particular, isotope shift measurements may show sensitivity to coupling induced by such particles. In this thesis, we describe the efforts to place bounds on this particle using isotope shifts of optical transitions in Ytterbium. We conduct the isotope shift experiment by measuring ions one at a time and in a co-trapped configuration following the protocol of correlation spectroscopy. We study the systematic uncertainty budget for both types of measurements. We apply the King plot method to isotope shift spectroscopy data and observe the King nonlinearity. Using the analysis of the nonlinearity patterns, we determine the significance of a second source of King nonlinearity, whose origin is currently unknown.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157564</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emergence, Formation and Dynamics of Hot QCD Matter</title>
<link>https://hdl.handle.net/1721.1/157563</link>
<description>Emergence, Formation and Dynamics of Hot QCD Matter
Scheihing Hitschfeld, Bruno Sebastian
Understanding the dynamics of Quantum Chromodynamics (QCD) in quantitative detail is one of the main frontiers in particle physics. While the last century gave us the formulation of the theory of nuclear interactions, QCD, as well as that of the rest of visible matter encoded in the Standard Model of Particle Physics, much remains to be understood. In particular, the hot QCD matter produced in high energy collisions of heavy ions presents a unique challenge to theory and phenomenology due to the vast number of different phenomena that take place in such a collision, and even more so because it is an out-of-equilibrium process. In this thesis, we make progress in two concrete directions in the vast landscape of hot QCD physics. The first one is quarkonium transport inside quark-gluon plasma (QGP), the high temperature phase of QCD. Over the past two decades it has been realized that a significant fraction of quarkonium suppression in high energy heavy ion collisions comes from dynamic dissociation and recombination processes, instead of static screening of the interaction potential as originally proposed by Matsui and Satz. Our contribution is the formulation of the precise correlation functions in QCD at finite temperature that describe the dissociation and recombination processes of heavy quarkonium in QGP, as well as their calculation in weakly coupled QCD and strongly coupled N=4 supersymmetric Yang-Mills theory. We also formulate the Euclidean version of these correlation functions so that they may be calculated using Lattice QCD techniques. In this way, our results provide the necessary ingredients to carry out an analysis of the suppression of ϒ states in heavy ion collisions in terms of the parameters of the QCD Lagrangian.&#13;
The second contribution we make is the development of tools to understand the process of hydrodynamization in QCD kinetic theory and their application to a simplified description where only a subset of the QCD scattering mechanisms are included. By doing this, we learn that the process of hydrodynamization in this theory, and specifically, how memory of the initial condition is lost, follows the recently proposed Adiabatic Hydrodynamization scenario.&#13;
Concretely, hydrodynamization proceeds through a sequential process in which a monotonically shrinking set of low-energy states dominates the dynamics, where the opening of an energy gap relative to the ground state(s) signals the start of each stage of this process. The hydrodynamic attractor is reached when only one low-energy state remains as the ground state, and the system approaches local thermal equilibrium following the adiabatic evolution of this low-energy state.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157563</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some experiments on the volatile oil of the Myrcia Acris</title>
<link>https://hdl.handle.net/1721.1/157491</link>
<description>Some experiments on the volatile oil of the Myrcia Acris
Fish, Chas. C. R.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1877
</description>
<pubDate>Mon, 01 Jan 1877 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157491</guid>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetooptical studies of small-gap semiconductors : Hg₁₋ₓCdₓTe and InSb</title>
<link>https://hdl.handle.net/1721.1/157490</link>
<description>Magnetooptical studies of small-gap semiconductors : Hg₁₋ₓCdₓTe and InSb
Weiler, Margaret Horton.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1977; Vita.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157490</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Anthracene, and its isomer, phenanthrene, and their derivatives; with a short investigation of the methods for the quantitative estimation of phenanthrene and anthracene</title>
<link>https://hdl.handle.net/1721.1/157489</link>
<description>Anthracene, and its isomer, phenanthrene, and their derivatives; with a short investigation of the methods for the quantitative estimation of phenanthrene and anthracene
Fletcher, Chas. R.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157489</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of competition in freight transportation to and from Boston, Massachusetts</title>
<link>https://hdl.handle.net/1721.1/157488</link>
<description>A study of competition in freight transportation to and from Boston, Massachusetts
Luykx, H. M. C.; McHugh, G. E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1931; Appendix contains numerous pamphlets.
</description>
<pubDate>Thu, 01 Jan 1931 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157488</guid>
<dc:date>1931-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Terminal problem of an industrial railroad</title>
<link>https://hdl.handle.net/1721.1/157487</link>
<description>Terminal problem of an industrial railroad
Lyons, H. M.; Lucy, E. D.
Thesis: B.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1925; Includes bibliographical references (leaves 23-25).
</description>
<pubDate>Thu, 01 Jan 1925 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157487</guid>
<dc:date>1925-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characteristics of epoxy resin-kaolinite compositions</title>
<link>https://hdl.handle.net/1721.1/157486</link>
<description>Characteristics of epoxy resin-kaolinite compositions
Waugh, George H.; Feldman, Marnin.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1957; Bibliography: leaf 36.
</description>
<pubDate>Tue, 01 Jan 1957 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157486</guid>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The use of discriminators in the linear detection of F-M signals</title>
<link>https://hdl.handle.net/1721.1/157485</link>
<description>The use of discriminators in the linear detection of F-M signals
Lu, Pao-Wei.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1944; Includes bibliographical references (leaf 51).
</description>
<pubDate>Sat, 01 Jan 1944 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157485</guid>
<dc:date>1944-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transcription of the adenovirus genome during productive infection of HeLa cells.</title>
<link>https://hdl.handle.net/1721.1/157484</link>
<description>Transcription of the adenovirus genome during productive infection of HeLa cells.
Price, Richard Pearsall.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1972; One unnumbered leaf inserted.; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157484</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The action of colicins E1 and K on proline transport in isolated membrane vesicles of E. coli.</title>
<link>https://hdl.handle.net/1721.1/157483</link>
<description>The action of colicins E1 and K on proline transport in isolated membrane vesicles of E. coli.
Kabat, Jonathan Peter.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1971; Seventeen unnumbered leaves inserted. Vita.; Bibliography: leaves 105-107.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157483</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms of fatigue crack initiation and propagation in an age hardenable aluminum alloy.</title>
<link>https://hdl.handle.net/1721.1/157482</link>
<description>Mechanisms of fatigue crack initiation and propagation in an age hardenable aluminum alloy.
Erhardt, Karl Edward.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1970; Vita.; Includes bibliographies.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157482</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symplectic fibrations and weight multiplicities of compact groups</title>
<link>https://hdl.handle.net/1721.1/157481</link>
<description>Symplectic fibrations and weight multiplicities of compact groups
Lerman, Eugene.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1989; Includes bibliographical references (p. 71-72).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157481</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Valuation model for Less Developed Countries' Debt in the secondary market</title>
<link>https://hdl.handle.net/1721.1/157480</link>
<description>Valuation model for Less Developed Countries' Debt in the secondary market
Carballo, Carlos Federico.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1989; Includes bibliographical references (leaves 75-79).
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157480</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Activation of the c-Ha-ras oncogene</title>
<link>https://hdl.handle.net/1721.1/157479</link>
<description>Activation of the c-Ha-ras oncogene
Tabin, Clifford James.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 1984; Bibliography: leaves 202-214.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157479</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Urban space heating with a heat pump-condenser temperature water system</title>
<link>https://hdl.handle.net/1721.1/157478</link>
<description>Urban space heating with a heat pump-condenser temperature water system
Yee, Wee Tong.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1976; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157478</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magneto-optical studies in In[subscript 1-x]Ga[subscript x]As[subscript y]P[subscript 1-y] semiconducting alloys</title>
<link>https://hdl.handle.net/1721.1/157477</link>
<description>Magneto-optical studies in In[subscript 1-x]Ga[subscript x]As[subscript y]P[subscript 1-y] semiconducting alloys
Alavi, Kambiz.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1981; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157477</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk analysis for earthquake-induced ground failure by liquefaction.</title>
<link>https://hdl.handle.net/1721.1/157476</link>
<description>Risk analysis for earthquake-induced ground failure by liquefaction.
Yegian, Mishac K.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaves 281-292.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157476</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Average frequency trajectory control : normal mode.</title>
<link>https://hdl.handle.net/1721.1/157475</link>
<description>Average frequency trajectory control : normal mode.
Yared, Khaled Ibrahim.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1976; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157475</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The implementation of a joint disaggregate demand model in an urban simulation</title>
<link>https://hdl.handle.net/1721.1/157474</link>
<description>The implementation of a joint disaggregate demand model in an urban simulation
Worms, Vincent Robert.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaves 114-115.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157474</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photosynthetic regeneration of ATP using native and immobilized bacterial chromatophores.</title>
<link>https://hdl.handle.net/1721.1/157473</link>
<description>Photosynthetic regeneration of ATP using native and immobilized bacterial chromatophores.
Yang, Ho Seung.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1976; Vita.; Includes bibliographies.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157473</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pressure broadening of infrared absorption lines at moderate densities.</title>
<link>https://hdl.handle.net/1721.1/157472</link>
<description>Pressure broadening of infrared absorption lines at moderate densities.
Wormhoudt, Joda Cornelius.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1976; Vita.; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157472</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental verification of breeding performance of fast reactor blankets.</title>
<link>https://hdl.handle.net/1721.1/157471</link>
<description>Experimental verification of breeding performance of fast reactor blankets.
Wu, Shin-Shyong.
Thesis: Nuc. E., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1976; Bibliography: leaves 154-158.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157471</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geology of Eastern Massachusetts</title>
<link>https://hdl.handle.net/1721.1/157470</link>
<description>Geology of Eastern Massachusetts
Crosby, William O. (William Otis), 1850-1925.
Thesis: B.S., Massachusetts Institute of Technology, Department of Geology, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157470</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Echoes From the Stone Reframing Preservation in Syria Through Haurani Folklore</title>
<link>https://hdl.handle.net/1721.1/157368</link>
<description>Echoes From the Stone Reframing Preservation in Syria Through Haurani Folklore
Alrifai, Hajar
Partially buried in the landscape of Hauran in southern Syria, my family’s 1500-year-old house, Alali—formerly a Byzantine church—further erodes with each passing year. Throughout the decades, the house has been subjected to various forms of destruction: from development, demolition, and rocket strikes to violent reconstruction. Its crumbling stones are laden with the memories of four generations and echo with a way of life that is disappearing. At the heart of Hauran are the fellahin, farmers who permanently settled in its villages in the late 19th century. As they settled, the fellahin reclaimed, inhabited, dismantled, and rebuilt the Byzantine structures, often rearranging or reimagining the original programs: chapels, houses, and cemeteries. In my family’s border village of Nasib—a place both liminal and at the margin—this rich local history lives not in formal archives but in scattered material like architectural ruins, oral poems, folk songs, diasporic transcripts, and 8mm video cassettes, many of which resonate as sonic artifacts. What began as a project of documenting the decay of our old house evolved into a meditation and manifesto on preservation outside the purview of top-down institutions. Through creative writing and cinematic intervention, Echoes from the Stone asks: what does it mean to preserve a place, and preservation for whom? In this proposed paradigm, ‘story’ becomes integral to architectural preservation. This story of Alali interweaves my journal entries with the encounters of my great-great-grandfather, Hassan Ali, an oral poet who founded the village. I further draw from my grandfather Faisal’s diaries, our family’s archival videos, and interviews with Nasib’s elders, including my grandmother Um Ghazi, an olive farmer, and Um Saado, a Bedouin matriarch and shepherd who once lived in the old home with her family.
By foraging for this counter-archive of living memories, I reveal intergenerational intersections which complicate and reimbue the colonial history of the village—and of Syria—with voices that echo from the stone, voices that persist and whisper from the ground, from across borders and oceans, and from within. This interdisciplinary chronicle draws from architecture, agriculture, literature, anthropology, and film, to reconstruct a social history of the village and speculate on alternate ways of dwelling, building, and preserving— reclaiming the archive, reinserting narrative, and reframing heritage through the folklore of Hauran.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157368</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wave Mechanics in Constructed Oyster Reefs and the Design of Nature-Based Coastal Adaptation</title>
<link>https://hdl.handle.net/1721.1/157367</link>
<description>Wave Mechanics in Constructed Oyster Reefs and the Design of Nature-Based Coastal Adaptation
Brice, James Vincent
There has been great interest in the potential of constructed oyster reefs (CORs) to function as nature-based coastal protection infrastructure, but most projects to-date are designed primarily for wave attenuation and fail to consider both the environmental conditions necessary for long-term oyster reef sustainability as well as the importance of education and outreach in fostering environmental stewardship. Realizing the promise of nature-based coastal adaptation means building physical, ecological and social infrastructure simultaneously, requiring a design-research methodology that combines an understanding of biological design constraints, physical analysis and community engagement. &#13;
&#13;
Physical and numerical wave flume experiments were conducted to investigate mechanisms of wave energy loss in oyster shell gabion-type CORs that place oyster biology in the foreground—particularly, the influence of across-shore width, spacing and structure porosity on wave attenuation under non-breaking wave conditions. Gabion widths of O(1) wavelength were found to attenuate waves by 40%. These losses were driven primarily by internal drag which was characterized experimentally and accurately modeled with the modified Ergun Equations and the waves2Foam library of the open-source CFD software OpenFOAM. &#13;
&#13;
This research was then translated into a suite of interactive design activities, featuring a tabletop wave flume, scale models of coastal features, and a set of coastal community member cards. Through design and creative inquiry, these tools seek to communicate complex biophysical processes in coastal ecosystems while empowering communities to reimagine what it really means to "build with nature".
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157367</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>When the Earth Breathes: An Anthology of Volcanic Urbanism</title>
<link>https://hdl.handle.net/1721.1/157366</link>
<description>When the Earth Breathes: An Anthology of Volcanic Urbanism
Carucci Alvarez, Maria Gabriela
Malpaís. A Spanish word used in volcanically-active landscapes to refer to the new basalt terrain that solidifies after an eruption. It translates literally to “bad country”, and it is defined as a “sterile, arid surface”. This thesis looks at the Tajogaite volcano, the most recent eruption in La Palma, one of the youngest of eight islands in the oceanic volcanic arc formation of the Canary Islands. It positions this event not as a unique site but as a manifestation of a network of bureaucratic colonial imaginaries that still operate within a disaster relief framework that exists in volcanic landscapes throughout the world. Together, these imaginaries draw an unyielding binary narrative about volcanoes as purely destructive entities, and further dismiss the porosity that exists between the geos, the bios and the polis. Igneous landscapes, through the production of new basalt floors, rich soils and ocean intrusions, traverse and redefine property boundary lines and national coastlines, which extends beyond plan views and into sectional shifts. This project aspires to spatialize the temporal moments of one volcanic eruption, questioning, ultimately, how the ownership of materials in flux, along with their transformations, can reframe our imagination of a city-volcano production that frames both as ephemeral, ever changing entities. Through ten allegories, cities are positioned inside of the geological realm, and are de-centered to contextualize them within a volcano’s lifespan. The first five stories describe the current framework, while the other half become allegories through which architecture and urbanism are leveraged as tools through which to understand the earth’s movements at different scales, temperatures and states of matter, in order to provide an alternative imaginary to current answers to the question of volcanic urbanism.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157366</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-driven Home Workspace Design: Interactive DIY Platform Mediating the User and Expert Literature</title>
<link>https://hdl.handle.net/1721.1/157365</link>
<description>Data-driven Home Workspace Design: Interactive DIY Platform Mediating the User and Expert Literature
Yi, Wangli
After COVID-19, some employees have opted to continue working from home (WFH) or have chosen a hybrid working mode. Previous research has shown that satisfaction with the physical environment and characteristics of home workspaces are directly related to mental health, which can affect productivity and well-being. This underscores the need for better designed WFH environments. This study explores the use of data-driven tools in interior design to enhance WFH setups. It posits that these tools can transcend traditional design limitations by incorporating professional expertise and facilitating user-driven design processes.&#13;
The tool's backend is built on a comprehensive collection and classification of research literature on WFH environments, creating an interactive platform where users can engage directly in the design process. This is achieved through real-time, machine-mediated suggestions that enhance well-being without the need for professional human designers. Employing a user-centered design framework, the study develops and tests a prototype to assess its effectiveness in empowering users to intentionally and sensitively redesign their home workspaces.&#13;
Results show that participating graduate students became more aware of their WFH environment during the design process, but largely it did not change their existing workspace decisions. This observation indicates the potential benefit of this interactive machine-mediated system as a design education tool. Further testing on other demographic groups, such as those who need to focus for long hours professionally at home as well as those who are specifically concerned with mental health issues, is anticipated as the next step for the evaluation of this platform.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157365</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Mindful Consumption</title>
<link>https://hdl.handle.net/1721.1/157364</link>
<description>Towards Mindful Consumption
Covarrubias, Juliana
Humanity is using up more of the Earth's resources than the planet can replenish each year. The unsustainable rate at which humanity is depleting the Earth's resources threatens the viability of our current lifestyles, posing significant challenges for future generations. Further, it places a heavy burden on the planet, resulting in several environmental problems, most notably climate change. Many approaches to combating climate change focus on lessening the impact of our current living habits on the Earth. Popular initiatives involving biodegradability, recycling, and carbon offsetting seek to reduce the effects of pollution while allowing humanity to keep consuming products at the same rate. Alternatively, reducing the production of these goods in the first place eliminates the need for such anti-pollution interventions downstream. This thesis considers climate change at one of its sources: overconsumption. The thesis examines the history of consumer culture to identify the causes of our current excessive consumption patterns. Through analyzing the influences that advertising and culture have on our behavior, this thesis aims to demystify and uncover the power we have over our actions as consumers. The final output of this thesis is a handwritten book of thoughts and sketches that is distributed around the public sphere to provoke conversations about our individual relationships with consumerism. These discussions may have broader implications as they spread and lead to behavioral shifts towards more mindfully consumerist lifestyles. Ultimately, this thesis uses a dialogue with itself to plant a seed challenging the status quo of overconsumption, catalyzing meaningful discussion about our responsibilities, behaviors, and concerns in a consumption-driven world.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157364</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tools for Togetherness: Building Social Networks through Public Tool-making</title>
<link>https://hdl.handle.net/1721.1/157363</link>
<description>Tools for Togetherness: Building Social Networks through Public Tool-making
Liu, Yanjun Emily
Our world appears connected with pervasive technology and information saturation. However, beneath the surface, a deep sense of disconnection and individualism persists, exacerbated by the COVID-19 pandemic, which claimed 7 million lives and prompted a reassessment of our societal values towards more collective orientations. This thesis investigates how individuals can help foster a society that values care, support, and mutual aid. By developing, documenting, and disseminating self-organized public tools—including flyers, posters, and installations that facilitate relationship-building—this work aims to challenge the prevailing alienation by demonstrating the importance of connectivity and exchange. Embedding mutual support and connectivity into daily routines should not merely be a contingency for crises but a fundamental component of our reality. The essence of this project is to disseminate this concept through public engagement installation art within the MIT and greater Cambridge community to cultivate awareness and actively engage the audience. The book details three social experiments designed to enhance connectivity and mutual support, with detailed documentation and reflections from a facilitator’s perspective on the complete process for anyone who hopes to start practicing small.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157363</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Waste to Structure: A Deep Reinforcement Learning Approach to Circular Design</title>
<link>https://hdl.handle.net/1721.1/157362</link>
<description>From Waste to Structure: A Deep Reinforcement Learning Approach to Circular Design
Sørensen, Karl-Johan I.
The design-to-construction process of buildings predominantly follows a top-down linear workflow, where a design is drawn and subsequently refined to determine the required materials and components. This approach assumes an infinite material supply or the capability to manufacture what is needed for the design. Constructing in this manner is resource-intensive and wasteful, making it incompatible with our global climate goals. One way to significantly reduce our material and environmental footprint is by extending the lifespan of building materials through circular design practices. In this approach, the available materials define the architecture, inverting the process from top-down to bottom-up. This method, known as Inventory-Constrained Design, enables the creation of new buildings using materials sourced from construction and demolition waste streams. These inventories, characterized by their non-standard and uniquely varied elements, are hard to design with due to the enormous quantity of possible combinations of even a few discrete elements. Identifying a feasible design that aligns with the designer's intent and meets functional requirements becomes an overwhelmingly time-consuming task, heavily reliant on manual trial and error. Computational optimization has been implemented to automate the process, but state-of-the-art algorithms still require manually pre-defining a parametric target design-space or take too long to compute when applied to larger problems.&#13;
&#13;
This thesis proposes a new method for circular design utilizing Deep Reinforcement Learning (RL) to design structures, requiring only a design gesture and the inventory as input. It works by training an artificial neural network to sequentially assemble a structure from inventory elements, following the gesture while meeting a structural goal. Hence, the design layout directly arises from available inventory. After training, the neural net can be employed instantaneously to design new structures with new inventories without any significant computational expense. To evaluate the effectiveness of the RL method, it is applied to the specific problem of inventory-constrained design of planar roof trusses and demonstrated in a realistic example of assembling a long-span roof from a disassembled transmission tower.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157362</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exœrcising a Haunted City</title>
<link>https://hdl.handle.net/1721.1/157361</link>
<description>Exœrcising a Haunted City
Wong, Bryan Hon Ting
With the looming threat of cultural erasure posed by Hong Kong’s repatriation to China no later than 2047, rituals emerge as the last resource sustaining the collective identity of the city. This thesis documents, through the study of local Taoist-Buddhist practices, the choreographies of rituals as a reparative tool to resist the disappearance of local culture. It is linked to findings from everyday domestic offerings to ancestors, annual festive performances of traumatic cleansing, and the booming clientele businesses of precautionary rites, all of which demonstrate their spatial and temporal qualities as methods to resist modern state control.&#13;
&#13;
To retain the residue of pre-modern practices as a critique of socio-political turmoil, this thesis suggests an alternative design that preserves and promotes the annual ghost festival for public participation. By revising the festival’s pilgrimage route and ritual sheds, this thesis transforms the traditional nature of ephemeral scaffoldings into permanent poles and follies. Situated along the city’s most haunted public estate, these structures are programmed as public facilities for fitness training and children’s playscapes. During the festival, they will be activated into ritual sheds, demonstrating a formal and functional contrast between the everyday and the ritual—from form to formlessness, exposure to closure, and lightness to heaviness.&#13;
&#13;
Designed to evade institutional surveillance, these clandestine transformations preserve solidarity and identity not by emphasizing the significance of priests exorcising in rituals, but by highlighting the quotidian motor memories developed from locals exercising within. The duality of ritual and everyday movements shall exercise the ghosts of a haunted city.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157361</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Common Grounds in Shared Waters Integrated Design for Negotiating Equitable Development in Gosabara-Mokarsagar</title>
<link>https://hdl.handle.net/1721.1/157360</link>
<description>Common Grounds in Shared Waters Integrated Design for Negotiating Equitable Development in Gosabara-Mokarsagar
Mehta, Dhwani
Along the west coast of India, in the waters of Gosabara-Mokarsagar, conflicting visions for the landscape mix and muddle. In 2016, Muslim fisherfolk of Gosabara, 100 families, already marginalized by religious, caste, and class distinctions, were banned from fishing, which was their sole traditional livelihood due to environmental protection claims. This led the community to file a petition for mass euthanasia to protest the loss of their rights. Despite their protests, the Government of India announced the Kerly Recharge Reservoir Ecotourism project in 2022 that overlooked their needs, threatened their cultural identity linked to fishing, and exacerbated their traumatic history of displacement that dates back to India and Pakistan’s 1947 partition. &#13;
&#13;
Although many groups’ contested visions map onto the shared waters of Gosabara-Mokarsagar, the fisherfolk are particularly excluded from decision-making processes. Finding a singular common ground among the contesting groups is challenging due to vast differences in power, position, and privilege. This thesis, therefore, aims to ensure equitable representation for all stakeholders, particularly disempowered fisherfolk, by an integrative design approach of forging a network of multiple ‘common grounds.’ The term ‘common grounds’ defines partnerships of two or three stakeholders, instead of all, based on mutual understanding and shared objectives like sustainable livelihoods, economic development, ecotourism, and avian conservation. &#13;
&#13;
First, I established a common ground with a local NGO, Mokarsagar Wetland Conservation Committee, by using photography, videography, and drawings to raise public awareness about this unique landscape. Initially intuitive and later strategic, I represented the lush waters as a shared home for both the fisherfolk and the birds. Second, I present a network of localized design strategies to enable partnerships that position the NGO as a mediator between the government and local communities, especially the fisherfolk, enabling it to foster alternative models of environmental stewardship. Through these partnerships, rooted in figurative ‘common grounds,’ the fisherfolk become primary, active collaborators in development processes. This thesis creates the conditions for a more equitable development model for this landscape by using design to enable grassroots partnerships that integrate communities into ecological conservation and economic growth projects.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157360</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Office of Back of House</title>
<link>https://hdl.handle.net/1721.1/157359</link>
<description>Office of Back of House
Bilal, Ekin
Office of Back of House (OoBoH, pronounced “ooh-boo”), is an architectural practice that operates at the intersection of ducts, conduits, scaffolding, custodial carts, mechanical rooms and sheds. OoBoH conducts design experiments in and around these maintenance objects and spaces typically separated from “architecture-proper.” By looking at the regulations, funding initiatives, zoning amendments and energy consumption routines that rule these spaces, OoBoH questions the boundaries that separate them from the “front of house” to begin with.&#13;
These “back of house” spaces exist right inside the thick poché line that bounds what is thought to be the domain of design. Back of house (BoH) is dictated by an obscured regime of maintenance processes, and by leveraging these currently unexamined spaces, OoBoH believes that they can become the site for tactical design interventions and new visions of maintenance culture. OoBoH is an attempt at entering architecture from the back door, re-characterizing existing buildings as dependent on the spaces and labor often hidden behind pastiche and façade.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157359</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tectonics of the semi-permanent: Reassembling fit-out architecture</title>
<link>https://hdl.handle.net/1721.1/157358</link>
<description>Tectonics of the semi-permanent: Reassembling fit-out architecture
Schnitzler, Jenna
In New York engineer Reginald Pelham Bolton’s 1911 obsolescence study “Building for Profit: Principles Governing the Economic Improvement of Real Estate”, he foretold a truth that remains today, that “the useful or economic existence of all classes of buildings, in the rapid march of modern conditions, is constantly shortening” (Bolton, 68). He details how the parts of buildings lose value at different rates—as they physically deteriorate, materials wear and things fall out of style, but even more quickly, he notes, do our structures become economically obsolete. Then and still today the durability of building materials is the least of our concerns when considering functional obsolescence. The physical is almost certain to exceed the economic durability of a building as a whole.&#13;
&#13;
Designers and developers recognize this gap between physical and economic obsolescence, and in response have called for a moratorium on new construction—opting instead to convert existing structures to meet changing programmatic demands. Yet in these conversions, we use the same extractive methods as new construction, filling existing frames and envelopes with non-structural light framing to differentiate the space inside. In this paradigm, to build inside an existing frame still relies first on the tool of demolition.&#13;
&#13;
The uneven wearing that Bolton wrote about in 1911 appears again in the iconic shearing layers diagram from Frank Duffy and Stewart Brand, who make a very similar economic argument, demonstrating that the economically fast-wearing interior layer accumulates the most investment over time, rebuilt on a cycle of every 5-10 years. We are facing a turning point in building; as of 2020, over 35% of total construction activity is renovation work, and we are making increasingly rapid changes to building function. This creates a paradigm of fit-out architecture that answers unpredictability and shifting values with indeterminacy, perpetuating a cycle of repetitive building. This project takes the converted structure as its starting point, experimenting with disassembly, reassembly, and the boundaries between fit-out and frame, sited within a larger material and economic framework that expands the definition of “value” beyond the monetary to include material resources embodied by a given structure.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157358</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Engineering Design for Reusable Concrete Building Structures</title>
<link>https://hdl.handle.net/1721.1/157357</link>
<description>Automated Engineering Design for Reusable Concrete Building Structures
Wongsittikan, Pitipat
Concrete contributes to 8% of global CO2 emissions through the reinforced concrete (RC) structural system. Unlike steel and timber structures, RC components are rarely reused due to the inseparable bond between concrete and steel. This results in downcycling of the components into aggregates or landfill material. The Pixelframe structural system [1] was proposed to facilitate the reusability of concrete components by implementing the external post-tensioning systems used in bridge structures and a fiber-reinforced system to design building beams and columns. This work presents an automated engineering design workflow for Pixelframe, including an engineering mechanics model of the system that conforms to ACI 318-19 [2] and the fib Model Code 2010 [3], half-scale tests to verify the preliminary behavior of the system, and a scalable design algorithm for minimum-embodied-carbon designs. The workflow also uncovers new insights on choosing ranges of concrete strengths based on element lengths and the potential carbon reduction from refining the number of different concrete strengths in a building. This work demonstrates the utilization of existing building systems in the context of reusability and the potential of automated computational structural design to aid design decisions and facilitate the circular economy of concrete structures.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157357</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Salt to Scale: The Seasoning of Buildings</title>
<link>https://hdl.handle.net/1721.1/157356</link>
<description>Salt to Scale: The Seasoning of Buildings
Battikha, Christina
We exist in thick layers of ancient minerals and material formations that perform to shape human architectural practices. Yet, with a continuous desire to force materials into designs, humanity has never ceased to disregard the active strength of a material to perform with time. The next twenty years align with a future of salt in the form of a dynamic, preservative, and corrosive mineral that shall never expire from Earth’s crust. Nevertheless, aspiring to mine, build, maintain, and preserve, humanity remains in constant search of other more durable materials designed with the presumption to last forever.&#13;
&#13;
Salt is certainly not the neutral product of a chemical reaction. It actively performs to preserve, corrode, accumulate, or maintain humanity’s creations. Embracing its ability to expand and reduce timescales, I investigate salt as a material that provides both corrosive and preservative properties offering current architectural practices the choice and responsibility of building for eternity or for a finite moment.&#13;
&#13;
I explore ancient salt cycles shaping the last human activities remaining on the Eastern coast of the Mediterranean, in Anfeh, Lebanon. Molded into a series of geo-cultural objects, salt containers embrace their materiality and escape the dullness of a mold to acknowledge the continuous cultural cycles that exist between time, salt, and its people.&#13;
&#13;
This thesis invites current design and construction practices to think across new intervals of time that reflect the building and un-building capacities of salt as a scalable mineral contributing to a salty architectural ritual that passes from one generation to the next; a source of luck amidst a time of ongoing crisis. Providing recipes from a salty kitchen, the work integrates seasonal practices to mine and craft salt into animate typologies embracing the forces of salt to challenge the standard architectural practice against one that thinks with the durations of salt.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157356</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surface Friction and Spectroscopic Probes of New Physics with Trapped Ions</title>
<link>https://hdl.handle.net/1721.1/157355</link>
<description>Surface Friction and Spectroscopic Probes of New Physics with Trapped Ions
Counts, Ian T.
To trap a single ion under vacuum is to control a microscopic, isolated quantum laboratory. This thesis describes two research programs made possible by single-ion trapping of Yb⁺. The first program is a study of a paradigmatic frictional interface: a trapped ion transported along a one-dimensional multistable energy landscape formed by a periodic, corrugated optical potential and a harmonic electric trapping potential. Two regimes of friction behavior are differentiated: single-slip (whereby an ion slips out of a corrugated groove and sticks into its neighboring groove) versus multislip (whereby the ion instead sticks into its next-neighboring or next-next-neighboring groove). By varying transport speed and corrugation depth, experimental signatures of both regimes are measured and used to develop a predictive Boltzmann model. At low enough corrugations, the ion can be expected to tunnel through (in addition to slip over) the potential barriers of the energy landscape, leading to a reduction in friction termed quantum lubricity. Attempts at seeing quantum tunneling via static Rabi oscillations are described. While no repeatable smoking-gun signature of tunneling was observed, the suppression of quantum tunneling behavior is attributed to certain technical limitations of the experimental apparatus, and possible remedies are considered. The second (and largest) research program of this thesis is a probe of new physics via isotope shift spectroscopy. Shift measurements are taken to sub-kHz precision across all five even-numbered isotopes of Yb⁺ along three clock transitions (quadrupolar shifts S₁/₂ → D₅/₂, S₁/₂ → D₃/₂, and octupolar shift S₁/₂ → F₇/₂). Deviations from theoretical predictions have been found and indicate higher-order Standard Model effects or even beyond-the-Standard-Model physics. Spectroscopic design and shift results, as well as possible theoretical conclusions, are discussed.
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157355</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>In Tension: Computational exploration of the design space of tensile network structures</title>
<link>https://hdl.handle.net/1721.1/157354</link>
<description>In Tension: Computational exploration of the design space of tensile network structures
Burke, Adam T.
Cable and rope net structures are lightweight tensile systems and generally cannot resist compression or bending. Tensile network structures are often used to span long distances without intermediate supports and have found applications in art, architecture, and structural engineering due to their physical and visual lightness. However, the design of tensile net structures is generally challenging since their form cannot be arbitrarily defined. Instead, a process of form-finding must be used to establish a geometry where all edges of the network carry only tensile forces.&#13;
Physical models and computational methods can be used for the form-finding of tensile network structures; however, the primary challenge in the design process is the adjustment of the network parameters to achieve a specific design. Recent work has shown that automatic differentiation software packages can be used to efficiently design funicular structures (that is, those that work in pure tension or pure compression) with additional designer-driven objectives, but these techniques remain largely inaccessible to general designers, architects, and engineers due to the involved process of problem setup and the limited interactivity of existing tools.&#13;
To address this limitation, I introduce a new tool set consisting of two main components, Ariadne and Theseus. These components take advantage of automatic differentiation of objective functions for efficient tensile network simulation and provide a user interface for architects, engineers, and other designers as a plugin for a commonly used 3D modeling software. In this thesis, I outline the structure and features of this tool set, show results of networks optimized with different composable objectives, and show some fabricated examples. Next, I explore the generation of more complex 3D network topologies through a procedural shape grammar. Finally, I explore the use of differentiable simulation in conjunction with machine learning techniques to optimize the geometry of tensile networks using semantic input and to develop an implicit representation of the space of equal-edge-length tensed network poses. Together, this new tool set and additional methods enable a more expansive exploration of the design space of tensile networks where design intent and practical constraints are respected.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157354</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing frameworks for an equitable future: from building decarbonization to generative modeling.</title>
<link>https://hdl.handle.net/1721.1/157353</link>
<description>Developing frameworks for an equitable future: from building decarbonization to generative modeling.
De Simone, Zoe
In this thesis I develop computational frameworks to understand equity under two perspectives: building decarbonization policy and generative modeling.&#13;
&#13;
Part 1 - Equitable building decarbonization&#13;
Buildings significantly contribute to global carbon emissions, necessitating urgent decarbonization to meet 2050 climate targets. The U.S. strives for net-zero emissions by 2050, supported by federal incentives promoting building upgrades. However, financing deep retrofits for all U.S. homes exceeds available public funds. This chapter proposes a model that examines long-term carbon reduction trajectories under various incentive policies, focusing on fairness and equity. Using Oshkosh, WI, as a case study, it explores the philosophical, economic, political, and mathematical dimensions of creating just and effective decarbonization policies that ensure healthy, low-carbon homes for all.   &#13;
&#13;
Part 2 - Equitable diffusion models&#13;
Generative Text-to-Image (TTI) models, while capable of producing high-quality images, often replicate training data biases. Traditional fairness views in machine learning, which consider fairness as binary, are challenged. This section introduces DiffusionWorldViewer, a novel framework with a Web UI that enables users to analyze the underlying worldviews of diffusion models and edit model outputs to align with their personal fairness perspectives, thus promoting a diverse understanding of fairness in AI technologies.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157353</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low-Cost Masonry for the Design of Barrel-Vaulted Flooring Systems</title>
<link>https://hdl.handle.net/1721.1/157352</link>
<description>Low-Cost Masonry for the Design of Barrel-Vaulted Flooring Systems
Haile, Nebyu Samuel
The world's population is projected to grow rapidly in urban areas, with an additional 2.5 billion urban dwellers expected by 2050 (UN-DESA, 2019). This urban growth will concentrate notably in Less Economically Developed Countries (LEDCs), where 16 of the top 20 most populous cities are anticipated to be situated by 2100 (Hoornweg &amp; Pope, 2017). LEDCs face a critical challenge in meeting the demand for affordable housing due to various factors, notably high material costs, which can account for up to 90% of residential construction expenses (Meikle, 2011). Most multi-story housing in LEDCs relies on reinforced concrete frames with flat slabs, a structurally inefficient system that in many locations depends heavily on imported cement and steel. Compounding this issue, the construction sector in LEDCs contributes significantly to their annual carbon emissions, sometimes doubling the global average and exacerbating the climate crisis (Yokoo et al., 2016). Addressing the pressing need for affordable housing requires alternative, more efficient structural systems that utilize affordable and environmentally conscious materials.&#13;
&#13;
This thesis aims to address the challenge of affordable housing by proposing the implementation of unreinforced barrel-vaulted earthen floor systems as an alternative to conventional concrete flat slabs, which are often cost-prohibitive in LEDCs. While existing research predominantly focuses on thin concrete shells for vaulted floors, this study emphasizes earthen vaulted floor systems, utilizing locally available and cost-effective materials. Specifically, it analyzes the maximum spanning capacity of three shallow unreinforced earthen barrel-vaulted floor typologies, examining their associated costs and carbon footprints. Furthermore, the thesis investigates the feasibility of one of these typologies by constructing and evaluating a physical 3m span prototype subjected to international building code loads. The outcomes highlight the structural integrity, cost-effectiveness, and reduced carbon footprint of earthen vaulted floor systems, offering insights into a more environmentally conscious and economically feasible floor system typology for building construction in LEDCs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157352</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Matter of the Hold: Housing futures and the paradigm of the ship</title>
<link>https://hdl.handle.net/1721.1/157351</link>
<description>The Matter of the Hold: Housing futures and the paradigm of the ship
Donovan, Inge; Pankhurst, David
Many of the port cities of North America are built upon ballast stones, discarded by ships after their transit across the Atlantic. Oftentimes, this material was sourced from waste, such as stone offcuts from quarrying, and transported across space and time, slipping through value systems; from waste, to weight, to commodity. In time, structures across the continent boasted chimneys or foundations that had begun their life in the distant granite quarries of Cornwall, and from bricks that had rounded Cape Horn - their material transience obscured by a perceived stability of form.&#13;
Buildings are usually seen as the endpoint of material flows, where they remain in intractable, fused assemblies until they reach obsolescence. This familiar pattern is currently playing out in the phased demolition of the Bunker Hill Public Housing Development, the largest affordable housing community on the East Coast. The BHHD can be seen in contrast to the Charlestown Navy Yard, an adjacent shipyard where centuries of investment have established a robust infrastructure of maintenance. We ask: how could the paradigm of the ship, and the creation of material strategies for large, complex assemblages funded by public spending be applied to housing in a resource constrained world?&#13;
In The Matter of the Hold, the demolition waste from Bunker Hill is inherited as ballast and transformed, a process made possible by the concept of the “building as hold.”&#13;
In light of the increasing shift towards buildings as storehouses of material to be held for future reuse, and as vessels of carbon sequestration, our thesis explores how design for the uneven, yet cyclical ebbs and flows of renewable resources erodes architecture’s traditionally rigid temporal boundaries of planning, construction, and occupancy, and produces temporally dynamic regimes of figure and form. The collection, administration and reconfiguration of waste materials results in the creation of new, regenerative forms of collective living that challenge the boom-and-bust logic of investment in public infrastructures.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157351</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Moving Sculptures: Animating the Human Body in Stop-Motion With Polymer Clay</title>
<link>https://hdl.handle.net/1721.1/157350</link>
<description>Moving Sculptures: Animating the Human Body in Stop-Motion With Polymer Clay
Smerekanych, Eva B.
The purpose of this thesis is to explore novel approaches to stop-motion animation techniques and design and sculpt an original moveable clay figure utilizing those techniques. This thesis focuses on animating human anatomy, testing the extreme physical and emotional states that can be portrayed within the medium of a stop-motion film. Stop-motion animation is a technique wherein a film is shot frame by frame, with animators manually moving characters between each frame to create a sense of movement when the frames are played back sequentially. While there are many possible approaches to producing stop-motion animation, this thesis focuses entirely on hand-sculpted clay animation, due to the tactile nature of the medium and the artistic expression it allows. The motivation for this study is to find a way to bring sculptures to life in a way that does not sacrifice attention to detail. Over the course of this study, a series of experiments were carried out, each testing a different approach to claymation character design. Each experiment culminated in a short stop-motion clip demonstrating the unique design approach. The result of this thesis is a novel design for a moveable clay figure which is used as the main character in an original stop-motion short film. This thesis explores the entire design process behind creating a moveable clay sculpture, including all challenges and considerations that played a role in informing the final figure design.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157350</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stories of the Sky</title>
<link>https://hdl.handle.net/1721.1/157349</link>
<description>Stories of the Sky
Chen, Zhanyi
My art practice probes how soft science fiction provides intervals to contemplate the tension among the relentless advancement of infrastructural technologies, their environmental and psychological repercussions, and the metaphors and culture in weather and environments. In this thesis, I explore such tension with a specialized focus on the sky via a series of artworks that engage with clouds, weather satellites, and human feelings. My experience receiving image signals from the Russian weather satellite Meteor-M2 has led me to understand the pervasive presence of satellites and their silent integration into, and control over, various environments—similar to numerous other contemporary infrastructures. The sky has never been merely a smooth surface but is striated with all kinds of machines, politics, and power dynamics. My thesis can be seen as exploring methods of coping as responses from an individual caught in such an intermingled environment, and as an inquiry into how we perceive things that are distant from us. Referring to soft science fiction approaches, I strategically misuse technologies to prioritize human subjectivity over technological functionality. In moments where the misused technologies cease to function, but to obscure, to resist, to complicate, to affect, I put the current dynamics between the self and technologies into play. Parallel to my artistic practice, I also take inspiration from elemental media studies for their broader theoretical discourse on the interplay between the environment and media. Media historian John Durham Peters argues for a more encompassing definition of media that includes environmental elements, including the sky, challenging the traditional dichotomy between nature and culture and the previous academic emphasis on culture over nature. 
This perspective allows for the exploration and appreciation of the sky’s cultural, emotional, and historical values, which are just as important, if not more so, than any other conventional media, resonating with the intentions behind my artworks. Thus, “media” becomes a term that is semantically richer than it already is and requires a nuanced interpretation embracing all its connotations, and my thesis provides ways to explore this materially. By focusing on the sky as a juncture where nature and culture collide, my thesis advocates for a synthesized view that recognizes the multifaceted narratives woven through the sky—stories of technology, of culture, of grand dreams and of small melancholy.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157349</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Music Interfaces with Soft Materials</title>
<link>https://hdl.handle.net/1721.1/157348</link>
<description>Enhancing Music Interfaces with Soft Materials
Yañez-Laguna, Diego
Despite the growing popularity of digital music instruments (DMIs) and relevant technological advances, accessibility and expressive potential remain significant challenges for musical interface designers. These issues stem from generic input-output mappings, sensor limitations, and a lack of physical connection between musicians and instruments. This thesis examines the benefits of incorporating soft materials into musical interfaces and why DMIs should be designed with musician-instrument relationships as a priority in order to enhance intuitiveness and expressiveness. The work culminates with the design and analysis of a prototype that explores the potential of a foam user interface. Featuring pressure sensors embedded within foam blocks, the prototype encourages tactile interaction and gives the user nuanced control over various musical parameters. The modular design of the foam blocks allows for versatile configurations, enabling users to control multiple parameters simultaneously with simple but responsive gestures.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157348</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>System-level design of low-carbon structures</title>
<link>https://hdl.handle.net/1721.1/157347</link>
<description>System-level design of low-carbon structures
Fang, Demi L.
“What is more likely to be associated with a reduction in emissions: switching from concrete to timber, or shortening the spans throughout the building?” While such insights are valuable for mitigating emissions from structural systems during early stages of design, it is difficult to answer these types of questions in current paradigms of performance-driven design. This dissertation makes several original contributions to the system-level design of low-carbon structures. First, a literature-supported network of strategies available to reduce emissions during early-stage structural design is established and evaluated on the bases of literature availability, impact, implementability, and compatibility. Material efficiency and material choice represent two key levers for reducing emissions in structural design, but it is difficult to navigate trade-offs between these strategies at a system level. Holistic design strategies can help achieve this, but current paradigms of performance-driven design (e.g., deploying rules of thumb, comparing a few design options, and optimization) are limited in their capacity to inform decision-making towards higher-performing designs. There is a particular opportunity to produce these insights using data-driven approaches given the growing quality and quantity of data in the field of low-carbon structural design. In response, this dissertation analyzes both types of data available in the field: wild data (measured from the industry) and synthetic data (produced from bottom-up parametric structural models). Data from over 200 fully designed structural systems from a structural engineering firm are analyzed.
This analysis is the first to 1) provide empirical evidence that floors and foundations represent the largest opportunities for carbon reductions and 2) evaluate the relationship between structural material quantities and embodied carbon in structural systems (many analyses evaluate the latter without the former). In a field where material choice dominates perceptions of how to reduce emissions, these new insights importantly affirm the prominent role of material efficiency in reducing a structural system’s emissions. While the design space of wild data includes a diverse variety of projects, leveraging a synthetic dataset computed from a bottom-up parametric model helps produce insights specific to the design problem at hand. The final contribution of this dissertation is a computational framework that leverages synthetic data to empower decision-making in design. The framework addresses two challenges: 1) extracting decision-making insights from design data, and 2) comparing decision-making across continuous (numerical) and categorical variables, which are typical in most design problems. In this framework, a machine learning model is trained on a provided set of design data to compute gradients across the design space. These gradients are distilled into “influence metrics”, which offer a novel, accessible way to build and supplement intuition on low-carbon design decisions. A few case studies in low-carbon structural design are presented to demonstrate the use of the proposed method with synthetic datasets. By striking a meaningful balance between applying rules of thumb and optimization, the method empowers a paradigm shift from performance-driven design to performance-informed, human-driven design.&#13;
Key words: embodied carbon of structural systems, design decision-making, low-carbon structural design
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157347</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-similar singularity formation and wellposedness theory for compressible fluids and dispersive PDE</title>
<link>https://hdl.handle.net/1721.1/157346</link>
<description>Self-similar singularity formation and wellposedness theory for compressible fluids and dispersive PDE
Cao Labora, Gonzalo
In this thesis, we study different problems related to singularity formation and local wellposedness of fluid equations and dispersive PDE. Regarding singularity formation, we construct radially symmetric smooth self-similar profiles for the compressible Euler equations which exhibit an implosion-type singularity in finite time. This constitutes the first part of the thesis. The second part consists of a non-radial stability analysis around those profiles to show singularity formation for adequate small perturbations of the profile. In particular, this stability analysis also allows us to conclude the existence of singularities for periodic initial data, and to obtain singularity formation for the corresponding equation with dissipation: the compressible Navier-Stokes equations. Moreover, the self-similar profiles constructed are intimately related to dispersive equations, and we show how to use them to prove finite-time singularity formation for some supercritical defocusing NLS equations via their hydrodynamical formulation. The third part of the thesis studies a different dispersive equation: the Zakharov–Kuznetsov equation, a generalization of the KdV equation to higher dimensions with applications in plasma physics. We improve the local wellposedness theory in the cylinder in both the deterministic and probabilistic settings.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157346</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparing Parameter Efficient Finetuning Techniques (PEFT) using Datamodels</title>
<link>https://hdl.handle.net/1721.1/157345</link>
<description>Comparing Parameter Efficient Finetuning Techniques (PEFT) using Datamodels
Chamdal, Harshal
Advances in machine learning, particularly through algorithmic innovations and large datasets, have led to models with hundreds of billions of parameters. Deploying these models is challenging and costly, especially due to the extensive finetuning required. Parameter-efficient finetuning techniques (PEFT) have been proposed to address this issue by significantly reducing the number of trainable parameters, achieving comparable results to full-parameter finetuning. Despite widespread adoption, PEFT methods are often used interchangeably without considering their qualitative differences and performance under various data distributions. This thesis extensively compares three PEFT methods: LoRA, BitFit, and (IA)³, using the ModelDiff framework to identify and apply data interventions. Our analysis reveals that the performance of these methods varies widely with different interventions, with BitFit showing the most variance, while LoRA and (IA)³ demonstrate greater resilience. This study informs the selection and optimization of PEFT techniques based on specific NLP task requirements, balancing performance, computational efficiency, and robustness to text variations.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157345</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gender Glitch Matrix: Queer Aesthetics and the Politics of Error in Digital Media</title>
<link>https://hdl.handle.net/1721.1/157344</link>
<description>Gender Glitch Matrix: Queer Aesthetics and the Politics of Error in Digital Media
Akdoğan, Merve
Situated at the intersection of digital media studies, queer theory, and glitch art, this thesis critically examines the normative biases and centralization in artificial intelligence (AI) and, more specifically, machine learning systems as they relate to marginalized identities. Unlike conventional approaches that prioritize the optimization and polishing of AI, this thesis introduces the notion of a glitch—a short-lived digital error—as both a metaphorical and an artistic technique that critically subverts societal norms. The thesis interrogates AI’s structure, dissecting its “black box” complexities to question the vulnerability of computational systems. It proposes an alternative approach that embraces error as a means of resistance, developing a critical commentary on technology production through artistic interventions. Grounded in Judith Butler’s “Matrix of Intelligibility,” the artistic interventions introduced in this thesis aim to craft a glitch aesthetic that integrates queer theoretical perspectives with practical machine learning applications. This thesis interrogates how AI models can propagate entrenched societal norms about gender, what political errors AI systems make, and what activist potential technology holds in challenging these cisheteronormative renderings. Aiming to develop and test machine learning models for identifying bias in digital media, this research is organized into four sections, beginning with the development of a theoretical framework and a review of relevant literature on AI errors and glitch art. Subsequently, the thesis explores the design of glitch prototypes through training and testing machine learning models. Finally, through experiments using these methodologies, including archival work, media manipulation, and attribution studies with AI models, this thesis reveals the AI systems’ deficiencies as they relate to queer identities. 
This work underscores the transformative potential of integrating artistic techniques to subvert and reveal technological development. It envisions technology not merely as a mechanism for perfecting systems but as a powerful conduit for advocating a more inclusive future.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157344</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Between City and Self: Reading Beirut in Mohamed Soueid’s Tango of Yearning</title>
<link>https://hdl.handle.net/1721.1/157343</link>
<description>Between City and Self: Reading Beirut in Mohamed Soueid’s Tango of Yearning
Anouti, Ghida
Set in Beirut in the aftermath of the Lebanese Civil War (1975-1990), the pseudo-documentary film Tango of Yearning (1998) follows the lives of several subjects who speak of love, loss, dreams, and cinema as they navigate their fragmented postwar city. Directed by underground Lebanese filmmaker Mohamed Soueid (b. 1959) and shot purely on video, the film is saturated with cinematic references, images of urban sites, sensual and religious symbols, and sociopolitical intimations. Soueid sees Tango of Yearning – the first in a trilogy titled Civil War – as an ‘obituary’ of his life prior to making this film. Hence, for him, the film is rooted in the past, yet I argue that it is a significant augury of Beirut itself as a palimpsest of urban memories sublimated by Soueid. This argument is nestled between Soueid’s assessment of his film as a personal work of cinema, and my own reception of it as symptomatic of Beirut’s history in the periods prior to, during, and after the Civil War.&#13;
Tango of Yearning is, at its core, a meditation on the city of Beirut as it transformed throughout various periods governed by the traumatic event of the Civil War. Through a close reading of the film, I reveal how an ostensibly private essay is also a medium for archiving memories either forgotten or suppressed by the nation’s contested amnesia of the war, while also investigating how the postwar city’s history intertwines with the filmmaker’s biography. A largely unrecognized yet significant contributor to the Arab world’s video and cinema scene, Soueid – an agent, actor, and narrator of the city – is one of the most sensitive chroniclers of life in Beirut during the 1990s and early 2000s. Weaving historical realism with fabulation to fill or distort representational lacuna, his film offers doubled lenses – one of his life and another of Beirut’s contemporary history. Through a chronological reading of an otherwise nonlinear film, I extract a history of Beirut in three stages: its cosmopolitan yet polarized 1960s with a brimming arts, film, and literature scene; its violent war characterized by sectarianism and fragmented nationalism; and its amnesic postwar era in which the film was created. Accordingly, I ask how Soueid’s private image-making apparatus draws an image of Beirut through his own autobiographical narration.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157343</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Damp Skin: Portraits of Taiwanese Domesticity, Resilience, and Otherness</title>
<link>https://hdl.handle.net/1721.1/157342</link>
<description>Damp Skin: Portraits of Taiwanese Domesticity, Resilience, and Otherness
Chan, Cheng-Hsin
This thesis is an intricate exploration of Taiwanese life under constant dampness, weaving together the present with historical threads and personal memories of home and motherhood alongside broader socio-historical narratives. It examines Taiwanese domesticity through the dual prisms of “dampness” and “enclosure failure” to reveal how these elements shape Taiwanese people’s physical comfort or fail to meet their needs. Central to this research is exploring the historical marginalization of the Taiwanese body in domestic spatial development under the influence of external powers.&#13;
&#13;
Damp Skin unfolds through three intertwined registers that offer diverse materials and perspectives spanning time and space, providing a layered understanding of Taiwanese history and contemporary experiences: I. Home, Memory, and Motherhood, II. Planetary Climate and Body, and III. Domesticity and Architectural Enclosure in Taiwan. This thesis argues for the continuous repositioning of our bodies (ourselves and family) in response to external factors — climate, society, and power. It serves to revisit the past, document the present, and speculate on the future, enhancing our understanding of everyday life in Taiwan and exploring potential cultural adaptations. Each thread collects materials and offers distinct perspectives on the historical and contemporary shaping of Taiwanese identity and space. Together, they form portraits of the complexities and nuances of Taiwanese domesticity, resilience, and otherness, framed through the intimate and expansive lens of dampness and enclosure.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157342</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Cycles of aMaízing Things</title>
<link>https://hdl.handle.net/1721.1/157341</link>
<description>The Cycles of aMaízing Things
del Busto, Juan Manuel Chávez Fernández
Throughout this thesis, maíz becomes a trans-scalar agent of exchange across time, cultures, and territories. Maíz, as both a symbol and a subject, is intensely charged with tradition and disruption, operating within a jumbled feedback state that transcends myths and industry. The work situates my reading of the artwork, Río Revuelto by the Mexican artist José Chávez Morado (1949), as a guiding framework to approach a kaleidoscopic entanglement of different narratives. Considering maíz under four different lenses (the cosmological, the national identity, the resistance, and the product), I argue for constant feedback among them across the re-transforming cycles of maíz. The crucial concern driving this exploration is how maíz and humans are ingrained into each other's systems — re-configuring methods, spaces, and forms of display. The display refers not only to maíz as a ‘product’ but as a continuous entity in transition, transforming and adapting to the social and cultural conditions where it circulates —whether through myth, ritual, portrayal, strategy of preservation, building typology, commodity, by-product, or history. The design approach is presented through performative artifacts that symbolize the systems through which maíz circulates. They are further represented in an essay film. Whether referencing myths, projections, displays, or products, the artifacts become mnemonic objects to think with—depicting the cycles of maíz as a world-building exercise. Maíz becomes the point that traces simultaneity in the history of humanity, representing a symbol eternally under construction. Acknowledging this monumental scale requires my work to be only a grain-sized glimpse of speculative potentials in design.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157341</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Alive Scene: Participatory Multimodal AI Framework for Collective Narratives in Dynamic 3D Scene</title>
<link>https://hdl.handle.net/1721.1/157340</link>
<description>Alive Scene: Participatory Multimodal AI Framework for Collective Narratives in Dynamic 3D Scene
Cheng, Chi-Li
This thesis introduces "Alive Scene," an online participatory platform for recording dynamic 3D environments and building collective interpretations of objects, events, and atmospheres within them. For instance, a user can browse a recording of a room and describe objects or events to locate them; or select a time frame, adjust the camera angle, and add a comment to share a new narrative of the scene with others. Unlike traditional digital formats such as simple videos or 3D models, this platform is both three-dimensional and temporal, and the views are searchable using natural language sentences and sorted by relevance. By building the platform and testing it with human subjects, this thesis demonstrates that such a new participatory medium for dynamic 3D environments fosters communal knowledge and enhances the spatial understanding of individual users. Alive Scene produces rich, semantic-level communication among users, akin to the dynamic propagation of cultural memes. The Alive Scene System integrates two advanced techniques: 3D scene reconstruction using Gaussian splatting, and semantic linking of human perceptions through the Contrastive Language-Image Pretraining (CLIP) model. These methods are currently among the most popular and efficient. The platform continually enriches its collection of users' views and interpretations through interactions with this semantic AI system, enabling the archiving of user inputs and suggesting new avenues for exploring diverse perspectives. The streamlined interaction interface promotes user engagement and facilitates the discovery of related views and perceptions. The user test employs a dynamic 3D scene of a student lounge, recorded at four different times, and involves 20 participants generating a total of 235 inputs. Four types of interactive behaviors were observed regarding users' views and interpretations: Disagreement, Simple Agreement, Sharing Perception by adding comments, and Adjusting Views.
The analysis indicates evolutionary trends: Initially, users express disagreements and provide objective, general comments. As the platform gathers these inputs, a transition occurs where users begin sharing more subjective information and reinterpreting others' views. Eventually, users adjust camera angles when the captions are agreeable. Visualizations of this analysis illustrate that these dynamic behavioral changes facilitate the development of collective perception. For further investigation, this study could benefit from incorporating more elaborate 3D scenes, additional recording times, and a larger number of participants.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157340</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond the Bioclimatic Chart: An Automated Simulation-Based Method for Assessing Natural Ventilation and Passive Design Potential</title>
<link>https://hdl.handle.net/1721.1/157339</link>
<description>Beyond the Bioclimatic Chart: An Automated Simulation-Based Method for Assessing Natural Ventilation and Passive Design Potential
Herb, Svenja
Technological advancements in the building industry have significantly transformed climate and comfort control in buildings. This allows for air conditioning in deserts and heating in the Arctic, ensuring occupant comfort. This innovation, however, has contributed to a homogenization in architectural designs globally, from the hot climates of Mumbai to the cold environments of Boston, and moderate settings like London. Such uniformity often overlooks local climatic conditions, resulting in increased energy consumption and elevated greenhouse gas emissions. Climate-responsive design, on the other hand, creates solutions that leverage local climates—such as through natural ventilation and optimal solar gain management—to reduce energy consumption. Depending on climate and program, the coordinated use of these passive design strategies may or may not lead to indoor thermal comfort conditions without the need for an air-conditioning system. There are two primary approaches to explore the passive design potential of a building during schematic design: the bioclimatic chart and building energy modeling (BEM). The former method is a key feature in building science textbooks and is based solely on widely available local weather data. It provides general design advice without requiring previous knowledge or the need to describe the building program. BEMs facilitate detailed testing of how a building is operated and how the above-listed passive design techniques can be combined to obtain the highest possible comfort conditions and energy savings. However, BEM has traditionally been more complex and time-consuming to use, as it requires significant knowledge of the underlying building physics and the numerical methods that mimic them. This thesis evaluates the bioclimatic chart's accuracy in predicting overheating hours associated with various passive design strategies, through comparison with BEM data. Furthermore, it introduces a new simulation-based approach called “ECOmpass”. 
ECOmpass automates early-stage design simulations and offers design recommendations for passive strategies with just one click.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157339</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Alternate Imaginaries for the Kinara: River Ravi’s Edge as a Threshold</title>
<link>https://hdl.handle.net/1721.1/157338</link>
<description>Alternate Imaginaries for the Kinara: River Ravi’s Edge as a Threshold
Khalil, Mahwish
In the lower riparian landscape of Punjab, Pakistan, various communities confront the challenges of living within the active floodplain of river Ravi as it flows alongside the city of Lahore. These communities navigate the dissonances of the river’s edge—its Kinara, marked and molded by persistent colonial (mis)representations rooted in practices of erasure and division. Stepping away from historical depictions that have reduced the river to a mere resource for acquisition, this thesis engages with design and the oral tradition of storytelling, known as Qissa Khwani, to propose new modes of knowing, witnessing, and ultimately, cultivating alternative imaginaries for Ravi. This thesis seeks to illuminate the overlooked narratives of a river and its communities by drawing inspiration from, and centering the voices and legacies of, those most impacted by regressive depictions of a linear floodplain. It stages newer encounters and engagements with Ravi and its communities by stitching together stories of numerous community members, the dwellers, the boatmen, and the civil defense divers, actively defying and transforming the seemingly static Kinara—their home—through cultural and economic production. These pluralistic alternatives serve as a deliberate departure from the current large-scale, mega-urban development projects planned for the riverfront, which not only overlook the communities living along its banks but also employ idealized depictions of Ravi to attract capital. Finally, this thesis questions how the river's edge can be remapped to allow for the dismantling of top-down visions while addressing an urgency embodied within the shallow, receding flows of a polluted river, whose uncertain future remains contingent on distinct lines.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157338</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Disrupting Monocultural Tendencies through&#13;
Multimodal Montage</title>
<link>https://hdl.handle.net/1721.1/157337</link>
<description>Disrupting Monocultural Tendencies through&#13;
Multimodal Montage
Singha, Mrinalini
This thesis contends with the pervasive impact of monocultural tendencies as manifested in the political, cultural, and media landscapes of contemporary India, particularly focusing on the unfolding context of 2024. Amidst an intensifying crisis marked by polarization, historical erasure, and the rise of hegemonic nationalism, this thesis posits art, particularly through the framework of `multi-modal montage,' as an agent of political disruption for `redistributing the sensible.' Tracing the aesthetic and political evolution of montage from its early 20th-century origins in Soviet cinema to its contemporary forms, the thesis outlines the transition from montage defined by collision and conflict to the soft, spatial, and interactive practices of figures such as Nam June Paik and Harun Farocki. It further investigates how `surface tension' and `unquiet objects' manifest within the multi-modal montage in the works of artists like Nalini Malani, Krzysztof Wodiczko, Shilpa Gupta, and Nida Sinnokrot.&#13;
&#13;
As an Indian artist, the author situates her own practice within this discourse, highlighting projects such as `The Whistleblower' (2023), a tangible archive within an everyday object, and `A Mystery for You' (2023-24), a fact-checking game that merges a tangible interface with a large language model (LLM). These works exemplify the thesis's argument that artistic interventions can critically challenge and reframe dominant sociopolitical narratives, offering new perspectives and resistances against monocultural hegemonies. Extending this analysis, the author discusses her exhibition 'Forensic Artifacts of a Democracy in Crisis' (2023) as an operative space. Through a curated assemblage of works, the exhibition provided a physical space for interaction, reflection, and conversation, enabling audiences to engage with the themes of the thesis viscerally. In all, this thesis argues for the critical role of art in challenging memory and forgetting: from fabricated histories to the fall and rise of monuments, and from the polarization of media to the flattening of identities, echo-chambers, absences, and grand narratives.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157337</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Contract, the Contractor, and the Capitalization of American Building</title>
<link>https://hdl.handle.net/1721.1/157336</link>
<description>The Contract, the Contractor, and the Capitalization of American Building
Spencer, Chelsea Anne
The heroic claims of twentieth-century architects notwithstanding, modern American architecture was built by general contractors. This new type of builder was unknown to US Americans before the Civil War, but by the turn of the twentieth century general contractors commanded a powerful position in the widening gulf between architects and the construction of their buildings. Operating at the critical inflection point between projection and materialization, paper and concrete, contractors appealed to investment-minded clients as fellow businessmen, offering them what neither craft builders nor professional architects could deliver: a completed building, for a fixed price, on a guaranteed schedule.&#13;
&#13;
This dissertation tells the story of how building became contracting in the United States during the long nineteenth century. Known to legal historians as the age of contract, the nineteenth century gave rise to a constellation of juridical and economic ideas that revolved around a vision of social relations modeled on market exchange and possessive individualism. Revealing the ideological and institutional foundations of today’s construction industry, the dissertation shows how nineteenth-century thinking about contract, freedom, value, and risk shaped the architectural building contract, the limits of the architecture profession, the practice of general contracting, and thus the modern relationship between architecture and building.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157336</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing (with) Trees: Active Agents in Architectural Production</title>
<link>https://hdl.handle.net/1721.1/157335</link>
<description>Designing (with) Trees: Active Agents in Architectural Production
Garinois, Laura-India
This thesis embarks on a multifaceted exploration of the relationship between urban trees, architectural representation, and the legal framework governing their existence, with a particular focus on tree hearings in Boston as a platform for this study. Against the backdrop of capitalist influences shaping urban landscapes, standardized modes of representation often prioritize economic interests, relegating urban trees to two-dimensional depictions in architectural drawings. Such representations obscure the rich complexity and ecological significance of trees, thereby shaping design choices that threaten their vitality. Amidst these challenges, Massachusetts has initiated efforts towards granting public trees legal recognition, providing a foundation upon which this study builds to advocate for further improvements in tree rights and protections. This encompasses tree hearings, where developers and residents seek permission for the removal of healthy public trees, involving municipal authorities, tree wardens, and local communities. Through extensive dialogue with experts and stakeholders dedicated to this cause, the thesis identifies loopholes within existing laws and institutional frameworks, leading to the development of a tree appraisal system that employs alternate representations of trees that encourage new ways of valuing their role within architectural thinking and production. The exploration examines how a more nuanced collaboration with trees in design processes can enhance the value of architecture, and how design can in turn contribute to the protection of trees. Ultimately, the goal is to enrich tree hearing conversations by recognizing them as reflections of a larger climate conversation around trees and nature. By intervening in their legal site and imagination, the thesis fosters a more inclusive dialogue that transcends the binary decision of whether to cut down a tree or not.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157335</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>What is Ecology?</title>
<link>https://hdl.handle.net/1721.1/157334</link>
<description>What is Ecology?
James, Aubrie R. M.
There are many ways to try to make sense of that which is. Ecology, which deals with organisms in relation to their environments, makes sense of that which is through the study of relations among and between organisms and their environments. Modern ecology is predominantly understood as a scientific enterprise. However, science as a methodology is too often aligned and entangled with extractive, capitalist logics: the cycle of enclosure-dispossession-scientific practice-imperial expansion not only undergirds and defines the ecological crises of our times but forecloses our ability to conceive of the diverse ways in which life is configured. For ecology, this is a predicament of ethics, yes, but also of a clear-eyed understanding of what is (and our relationship to it). The urgent question for ecologists, given this predicament, is how to break out of this cycle. This thesis explores the potential of building an artistic practice to question the forms of ecology: how it is conducted, how it is communicated, and what it produces. Drawing inspiration variously from feminist, postcolonial, and ecosocial art, media theory, and philosophy, this thesis probes the limits of ecology under the suspicion that the point of leverage for change is to differently enact how we think, make, and do in relation to the world in, around, and constituting us.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157334</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Much Does It Really Cost? A Dynamic Approach to Building Retrofit Costs for Decarbonization Pathways</title>
<link>https://hdl.handle.net/1721.1/157333</link>
<description>How Much Does It Really Cost? A Dynamic Approach to Building Retrofit Costs for Decarbonization Pathways
Kirkeby, Amanda
Carbon emissions are driving the planet out of its delicate Goldilocks balance. Evidence and the call to action date back to 1896, when Swedish scientist Svante Arrhenius published his seminal paper that first predicted the effect of carbon dioxide on global temperatures. With the Intergovernmental Panel on Climate Change (IPCC) goal of global net zero emissions by 2050, the urgency is stronger than ever. An ever-growing number of municipalities are setting pledges to do their part, often without a concrete plan. With buildings accounting for 40% of total global emissions, building retrofits are a key component of these pathways to zero carbon. Urban building energy modeling (UBEM) research efforts have developed physics-based decision-making tools to define city-scale technology pathways to reach climate goals. However, a crucial question in making these pathways actionable has been largely neglected: how much does it really cost? The scarcity of contemporary cost data and methods for cost prediction at the urban scale makes this question difficult, and further questions around equitable incentive programs nearly impossible to answer. This work demonstrates the concept and relevance of implementing a dynamic cost model in the UBEM context. Several cost models are applied to a case study of 13,000 residences in Oshkosh, WI to predict costs for homeowners to retrofit their homes with three different upgrade packages. A willingness-to-pay analysis is then performed with upfront cost predictions from different models, illustrating the impact a more robust cost model may have in providing more realistic predictions of an upgrade strategy’s techno-economic success. Through its compatibility with existing UBEM frameworks and local input costs, the dynamic building upgrade cost model holds the potential to further support municipalities in developing economically feasible building retrofit strategies for decarbonization pathways.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157333</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pathways to Net Zero: Financing Strategies For Low-Income Homeowners</title>
<link>https://hdl.handle.net/1721.1/157332</link>
<description>Pathways to Net Zero: Financing Strategies For Low-Income Homeowners
Moore, Lauren
Housing retrofits are crucial for accomplishing national housing sector decarbonization goals. Single-measure retrofit improvements are not sufficient for low-income homes, which are often in less-than-optimal condition and are consequently uncomfortable and expensive to operate. Comprehensive retrofit approaches are necessary to achieve the energy efficiency targets for the aging housing stock. Historic educational and economic barriers pose challenges for incentivizing low-income homeowners to retrofit their homes. Proactive strategizing that considers both educational and economic factors is needed to see increased retrofit adoption amongst these groups. Policy makers need an understanding of retrofit impact for more effective resource allocation, and homeowners need better incentives and tools to conceptualize the benefits, time commitment, and cost associated with deep retrofits. To address this problem, we present a retrofit pathway modeling framework to accurately predict the time required for a homeowner to achieve comprehensive retrofits. Taking retrofit cost and annual energy savings into account, we propose a new government-sponsored and -led financing program, inspired by the successful 401(k) retirement plans and 529 savings programs, which offers either a 2x or 3x match to the annual investment the homeowner commits to saving each year, ensuring low-income homeowners are accounted for in the journey to building sector decarbonization by 2050 and beyond. For a case study home in the Grove Park neighborhood of Atlanta, Georgia, hot water heat pump retrofits are the most impactful on annual building energy use, but low-cost retrofits with short payback periods, such as installing LED light fixtures and low-flow showerheads, have the largest potential for shortening the years required to achieve comprehensive retrofits and are therefore recommended for policy makers to incentivize in the community. 
Strategic financing can be used to ensure a financially feasible pathway for homeowners with varying annual budget amounts. For the example home, the program allows homeowners who invest only $50 annually to achieve comprehensive retrofits four times faster than if they only utilized existing incentive programs. Individual building energy simulation combined with socioeconomic analyses is needed to meet the needs of diverse low-income communities across the United States.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157332</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Exploration of Origami Tessellation Design: Harnessing Shape Grammar for Flexible Folding Structures</title>
<link>https://hdl.handle.net/1721.1/157331</link>
<description>Computational Exploration of Origami Tessellation Design: Harnessing Shape Grammar for Flexible Folding Structures
Qiu, Lingyi
Origami tessellation, with its intricate folding patterns, presents a unique blend of artistic expression and engineering application. However, the design process often proves daunting due to its complexity, limiting accessibility to enthusiasts and impeding its potential impact in engineering and architecture fields. This thesis aims to lower the barrier of origami tessellation pattern design by leveraging shape grammar principles. Shape grammar provides a systematic framework for generating and analyzing folding patterns, offering a more intuitive and structured approach to design. Through computational exploration and experimentation, this research demonstrates the efficacy of shape grammar in creating diverse and innovative origami tessellation patterns. By streamlining the design process, this approach not only enhances the experience for origami enthusiasts but also opens up new avenues for engineering and architecture applications, including deployable structures, flexible materials, and adaptive systems. The integration of shape grammar into origami tessellation design has the potential to catalyze advancements in both artistic expression and practical utility, fostering creativity and innovation in diverse fields.&#13;
Keywords: computational origami design, origami tessellation, shape grammar
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157331</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Liquid to Stone: Reimagining the design of concrete structures for reuse</title>
<link>https://hdl.handle.net/1721.1/157330</link>
<description>From Liquid to Stone: Reimagining the design of concrete structures for reuse
Donovan, Inge; Schnitzler, Jenna
Every year, 360 million metric tons of concrete construction waste are sent to landfill in the United States, in large part originating from the demolition of economically obsolete buildings. Meanwhile, global demand for new concrete is accelerating – in 2021, the production of new concrete was responsible for up to 9% of global CO2e emissions, and our dependence on concrete is only expected to rise over the next 50 years.&#13;
Concrete’s ubiquity is reinforced by its liquidity; it is simultaneously invisible and ever-present, undergirding global modernization through its cheap, local nature and its ability to take on any form in short order. However, design with concrete has remained mostly unchanged, with inefficient, irreversibly fused structures cast in place to meet quickly changing programmatic needs, few of which survive longer than 30-50 years. Due to its careless application, concrete is perceived as a low-value material, and is therefore used wastefully, discarded quickly, and usually downcycled. The monolithic and inflexible nature of reinforced concrete structures perpetuates concrete’s culture of obsolescence and demolition.&#13;
To meet emissions targets and demand for building, we need to close the loop by developing a circular economy of structural materials. Instead of reusing salvage materials that have already entered the waste stream, this thesis confronts the design of new concrete structures directly, presenting the design of and methodology behind Pixelframe, a precast kit of parts for reconfigurable concrete structures. In a future where buildings are increasingly seen as stockpiles for subsequent reuse, the reinvention of concrete structures is an imperative that presents an opportunity for a new tectonic – concrete is no longer a liquid poured once and cured on site, but instead is a material more akin to stone, retaining value across multiple lifespans.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157330</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parametric PAINTOVER: Generating Design Models via Image Encoders and Latent Trajectories</title>
<link>https://hdl.handle.net/1721.1/157329</link>
<description>Parametric PAINTOVER: Generating Design Models via Image Encoders and Latent Trajectories
Tas, Demircan
Design is an iterative process where physical or virtual prototypes are created, rendered, evaluated, and modified repeatedly. Sketches and direct manipulations are made on the rendered or fabricated mediums to create and communicate intended changes. Parametric design is a prominent paradigm in design and architecture where hand-crafted functions map input parameters to a design space to rapidly generate samples. Direct modifications often lead to novel states outside the design space of a parametric model. Moreover, parametric models are not cyclic; their input and output spaces are not interchangeable without human intervention. Models must be reconfigured to accommodate out-of-domain changes, preventing parametric design tools from being integrated into early phases of design where changes are commonplace. We propose latent spaces of large pre-trained auto-encoders as shared design spaces for translating states of design among mediums and dimensions. We implement rendering and image encoding to use images as an interface between the outputs and inputs of the model, enabling users to make direct modifications by painting over. We use sketches, renderings, and 3D models for sampling latent spaces. We share experimental results acquired through linear interpolation and a custom spline implementation in latent spaces. We present samples from found latent trajectories matching samples from ground-truth parametric design models. We find that trajectories exist in latent spaces that approximate axes in parameter spaces. Using images and 3D models as input and output, we provide a cyclic, software-agnostic tool for design generation with parameter approximation capabilities that generalize. We provide findings from experiments and present a software repository for parametric paintover, including our sketch augmentation model Inverse Drawings and the many-dimensional latent spline implementation L-NURBS.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157329</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling technology pathways and retrofit adoption to achieve city-wide building emissions reduction goals</title>
<link>https://hdl.handle.net/1721.1/157328</link>
<description>Modeling technology pathways and retrofit adoption to achieve city-wide building emissions reduction goals
Berzolla, Zachary M.
Achieving net zero emissions from buildings by 2050 is an unprecedented challenge that will require an all-in effort at the local, state, federal, and international levels. The exact path to this goal in existing buildings varies widely from one community to another; thus, local planning efforts and a bottom-up approach are needed to attain emissions reduction goals. This dissertation lays out a framework for creating technology pathway roadmaps to help cities around the world identify actionable strategies to achieve their building emissions reduction goals. These “technical potential” roadmaps can help policymakers quantify the exact requirements, in terms of retrofits, workforce, and materials, needed to attain their end goals. The application of these tools in 24 cities around the world is discussed. A sound roadmap is only as good as its implementation, and current retrofit rates lag behind what is necessary to achieve 2050 goals on time. One of the oft-cited barriers to retrofit adoption is high upfront cost. This dissertation documents a survey carried out by the author and the resulting model used to quantify households’ willingness to pay for retrofits. Leveraging the willingness-to-pay model enables policymakers to analyze the techno-economic pathways to their goals. Finally, one of the greatest challenges to achieving emissions reduction goals is the timeline of retrofit adoption. Under the current business-as-usual retrofitting rate, less than a fifth of the building stock will be retrofitted by 2050. To help policymakers grasp this temporal challenge, this dissertation introduces a novel application of technology diffusion models that can quantify retrofit adoption over time. The tools developed in this dissertation aim to provide communities of all sizes with data-driven insights to meet their ambitious but necessary building-related decarbonization goals in a timely manner.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157328</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementation of Machine Connectivity in Low-Volume High-Variety Manufacturing Line</title>
<link>https://hdl.handle.net/1721.1/157327</link>
<description>Implementation of Machine Connectivity in Low-Volume High-Variety Manufacturing Line
Pal, Kanishk
This thesis provides a comprehensive analysis and implementation plan for enhancing machine connectivity within a manufacturing facility at SLB. The study investigates the existing limitations of the facility's connectivity infrastructure and proposes an advanced connectivity software suite as a solution, presenting a compelling business case for its implementation. The software’s scope involved DNC (direct numerical control), allowing for line-by-line feeding of CNC code to machine controllers, as well as machine data collection for real-time shop floor monitoring. The research emphasizes the development and implementation of an advanced network infrastructure designed to improve efficiency, security, and data handling capabilities. Cybersecurity practices are also discussed, specifically those related to industrial control systems that leverage CNC machining processes. The software implementation process is detailed, highlighting the necessary steps and information required for successful integration. These include: 1) securing connections to critical CNC machine controllers, 2) acquisition of hardware, including a local server and network switch, 3) server bring-up through remote imaging and installation of standard monitoring tools, and 4) implementation of software on edge devices for CNC file transfer and machining data collection. Additionally, the thesis discusses the limitations encountered during implementation and outlines future steps to address these challenges.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157327</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Words to Worlds: Bridging Language and Thought</title>
<link>https://hdl.handle.net/1721.1/157326</link>
<description>From Words to Worlds: Bridging Language and Thought
Wong, Lionel Catherine
What do we understand when we understand language? Human language offers a broad window into the landscape of our thoughts. We talk about what we see, believe, and imagine, posing questions and communicating our plans. Language, in turn, stocks our mental inventories with new concepts and theories, communicating ideas that we might not otherwise have discovered by thinking on our own even over the course of a lifetime. How do we make meaning from language, and how, in turn, does the meaning we construct from language draw on the other resources and capacities of human thought, from perception, to mental simulation and decision making? This thesis proposes a computational framework for modeling language-informed thinking, organized into two parts. In the first, I overview the overarching framework that makes up the backbone of this thesis, Rational Meaning Construction, which proposes how natural language can construct arbitrary expressions in a flexible, symbolic, and probabilistic language of thought that supports general inferences. I present examples and experiments demonstrating the range of this theory, modeling how concrete propositions and questions in language can update and query beliefs about many different domains of knowledge. In the second section, I turn to language that communicates more abstract conceptual knowledge – generic background concepts and theories that we can learn from language, and which give us building blocks for representing more concrete beliefs. I present three models that build on the basic premises of Rational Meaning Construction to learn new lexical concepts and theories from language. The first models how we can learn new theories from generic sentences that explicitly communicate or implicitly presuppose abstract knowledge. The second elaborates on this model to also incorporate environmental feedback alongside information from language. 
The third suggests how we can learn the meanings of new words from scratch, with very little linguistic data, using principles of both representational and communicative efficiency to guide learning. I conclude by discussing open questions that this thesis raises about how we learn and understand language, and outline future directions that might make progress on answering them.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157326</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wrought iron lattice bridge</title>
<link>https://hdl.handle.net/1721.1/157309</link>
<description>Wrought iron lattice bridge
Church, Christopher A.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157309</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lattice girder bridge</title>
<link>https://hdl.handle.net/1721.1/157308</link>
<description>Lattice girder bridge
Burrison, Henry K.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157308</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capitalization of electric railways</title>
<link>https://hdl.handle.net/1721.1/157307</link>
<description>Capitalization of electric railways
Zee, J. Zohn.; Zi, Su.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1915
</description>
<pubDate>Fri, 01 Jan 1915 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157307</guid>
<dc:date>1915-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low temperature deformation of copper single crystals oriented for multiple slip</title>
<link>https://hdl.handle.net/1721.1/157306</link>
<description>Low temperature deformation of copper single crystals oriented for multiple slip
Saimoto, Shigeo.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Metallurgy, 1964; Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157306</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sobolev tests for uniformity on compact Riemannian manifolds,</title>
<link>https://hdl.handle.net/1721.1/157305</link>
<description>Sobolev tests for uniformity on compact Riemannian manifolds,
Giné, Evarist,
            1944-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1973; Vita.; Includes bibliographical references.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157305</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dietary fatty acids, prostaglandins and infectious and endotoxin challenges in guinea pigs</title>
<link>https://hdl.handle.net/1721.1/157304</link>
<description>Dietary fatty acids, prostaglandins and infectious and endotoxin challenges in guinea pigs
Mascioli, Edward A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1984; Vita.; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157304</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The infrared spectrum of solid methanol</title>
<link>https://hdl.handle.net/1721.1/157303</link>
<description>The infrared spectrum of solid methanol
Salter, Leonard P.
            (Leonard Paul)
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1948; Includes bibliographical references (leaves 31-32).
</description>
<pubDate>Thu, 01 Jan 1948 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157303</guid>
<dc:date>1948-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Telecommunication industry in Japan : comparative study of telephone market history between U.S. and Japan.</title>
<link>https://hdl.handle.net/1721.1/157302</link>
<description>Telecommunication industry in Japan : comparative study of telephone market history between U.S. and Japan.
Yamanouchi, Ichiro.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1976; Bibliography: leaves 153-155.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157302</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The design of integrated distributed amplifiers</title>
<link>https://hdl.handle.net/1721.1/157301</link>
<description>The design of integrated distributed amplifiers
McHarg, Jeffrey Clay.
Thesis: Elec. E., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1980; Bibliography: leaf 96.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157301</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Steady spinning of synthetic silk-like fibers and transient filament stretching of semi-dilute and concentrated polymeric fluids</title>
<link>https://hdl.handle.net/1721.1/157300</link>
<description>Steady spinning of synthetic silk-like fibers and transient filament stretching of semi-dilute and concentrated polymeric fluids
Brauner, Octavia Flora,
            1975-
Thesis: S.M., Massachusetts Institute of Technology, Department of Chemical Engineering, 2001; Includes bibliographical references (p. 111-115).
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157300</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The effects of taxing unemployment benefits</title>
<link>https://hdl.handle.net/1721.1/157299</link>
<description>The effects of taxing unemployment benefits
Ellis, W. Philip.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1999; "September 1999."; Includes bibliographical references.
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157299</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>What makes the money go round?</title>
<link>https://hdl.handle.net/1721.1/157298</link>
<description>What makes the money go round?
Rosenblat, Tanya,
            1971-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references (p. 129-133).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157298</guid>
</item>
<item>
<title>Essays on the economics of work and family</title>
<link>https://hdl.handle.net/1721.1/157297</link>
<description>Essays on the economics of work and family
Johnson, John H.
            (John Henry),
            1973-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references (p. 117-120).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157297</guid>
</item>
<item>
<title>Essays on international macroeconomics : the real exchange rate, income inequality and the international investment position of small countries</title>
<link>https://hdl.handle.net/1721.1/157296</link>
<description>Essays on international macroeconomics : the real exchange rate, income inequality and the international investment position of small countries
García, Pablo S.
            (Pablo Silva),
            1970-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157296</guid>
</item>
<item>
<title>Essays on crises</title>
<link>https://hdl.handle.net/1721.1/157295</link>
<description>Essays on crises
Dudek, Maciej Konrad,
            1971-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157295</guid>
</item>
<item>
<title>Social security, pensions, and the retirement decisions of individuals and couples</title>
<link>https://hdl.handle.net/1721.1/157294</link>
<description>Social security, pensions, and the retirement decisions of individuals and couples
Coile, Courtney.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157294</guid>
</item>
<item>
<title>Repeated games with private information</title>
<link>https://hdl.handle.net/1721.1/157293</link>
<description>Repeated games with private information
Amarante, Massimiliano,
            1966-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references (leaves 56-59).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157293</guid>
</item>
<item>
<title>The information content of asset prices and emerging market crises</title>
<link>https://hdl.handle.net/1721.1/157292</link>
<description>The information content of asset prices and emerging market crises
Aguiar, Mark.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, c1999; Includes bibliographical references (p. 91-95).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157292</guid>
</item>
<item>
<title>Essays on market integration and productivity</title>
<link>https://hdl.handle.net/1721.1/157291</link>
<description>Essays on market integration and productivity
Park, Charles C.,
            1968-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, February 1998; Includes bibliographical references (leaves 110-112).
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157291</guid>
</item>
<item>
<title>Three essays on search and bargaining models</title>
<link>https://hdl.handle.net/1721.1/157290</link>
<description>Three essays on search and bargaining models
Dasgupta, Sugato.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1999; Includes bibliographical references (leaves 92-94).
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157290</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays on information production and information use in financial markets</title>
<link>https://hdl.handle.net/1721.1/157289</link>
<description>Essays on information production and information use in financial markets
Solomon, Amit.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1998; Includes bibliographical references.
</description>
<pubDate>Thu, 01 Jan 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157289</guid>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Working of two classes of silver lead ore from the Merrimac Mining Co's Lode at Newbury, Mass.</title>
<link>https://hdl.handle.net/1721.1/157288</link>
<description>Working of two classes of silver lead ore from the Merrimac Mining Co's Lode at Newbury, Mass.
Townsend, Walter Davis,
            1856-1918.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157288</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Richmond charcoal iron furnace</title>
<link>https://hdl.handle.net/1721.1/157287</link>
<description>The Richmond charcoal iron furnace
Robinson, Thos. W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157287</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metallurgical treatment of an argentiferous galena from Burleigh Tunnel, Colorado</title>
<link>https://hdl.handle.net/1721.1/157286</link>
<description>Metallurgical treatment of an argentiferous galena from Burleigh Tunnel, Colorado
James, Samuel.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157286</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An article on salt</title>
<link>https://hdl.handle.net/1721.1/157285</link>
<description>An article on salt
Burnet, Moses D.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157285</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report on the dressing and metallurgical treatment of an argentiferous lead ore from Georgetown, Colorado as performed at the Mining Laboratory of the M. I. T.</title>
<link>https://hdl.handle.net/1721.1/157284</link>
<description>Report on the dressing and metallurgical treatment of an argentiferous lead ore from Georgetown, Colorado as performed at the Mining Laboratory of the M. I. T.
Jackson, F. H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157284</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Paper mills and machinery</title>
<link>https://hdl.handle.net/1721.1/157283</link>
<description>Paper mills and machinery
Hollingsworth, S.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157283</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essex Company's Dam, across the Merrimac River at Lawrence</title>
<link>https://hdl.handle.net/1721.1/157282</link>
<description>Essex Company's Dam, across the Merrimac River at Lawrence
Sargent, W. F.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157282</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Review of the Fall River Water Works</title>
<link>https://hdl.handle.net/1721.1/157281</link>
<description>Review of the Fall River Water Works
Allen, Samuel E.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157281</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Holyoke Dam</title>
<link>https://hdl.handle.net/1721.1/157280</link>
<description>Holyoke Dam
Huntington, W. F.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157280</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Brookline Water Works</title>
<link>https://hdl.handle.net/1721.1/157279</link>
<description>Brookline Water Works
Handy, Edward A.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157279</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tunnelling</title>
<link>https://hdl.handle.net/1721.1/157278</link>
<description>Tunnelling
Hammatt, Edw. A. W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157278</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A wrought iron bowstring girder</title>
<link>https://hdl.handle.net/1721.1/157277</link>
<description>A wrought iron bowstring girder
Dorr, E. S.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157277</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Salem Water Works</title>
<link>https://hdl.handle.net/1721.1/157276</link>
<description>The Salem Water Works
Dodge, Frank S.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157276</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layer-by-Layer Nanoparticles for Cytokine Delivery</title>
<link>https://hdl.handle.net/1721.1/157260</link>
<description>Layer-by-Layer Nanoparticles for Cytokine Delivery
Pires, Ivan S.
In the past decade, immunotherapy has emerged as a promising strategy for cancer treatment. However, immunotherapy has failed to improve responses in certain cancers, such as ovarian cancer (OC). The action of cytokines in the tumor microenvironment (TME) is key to regulating immune responses, but dose-limiting toxicities limit the application of cytokines in cancer therapy. One promising approach to improving treatment with cytokines is nanoparticles (NPs), which, when modulated via layer-by-layer (LbL) assembly, can provide many of the desirable characteristics of cytokine-delivery vehicles, including tumor cell targeting, subcellular localization, and improved pharmacokinetics. In this thesis, we address some aspects of NPs that have limited their clinical utility, including manufacturing, control over self-assembly, and mechanistic understanding of their interactions in biological environments. The focus here was on liposomal LbL-NPs coated with a bilayer of poly-L-arginine (PLR) and poly-L-glutamate (PLE). Coating NPs with PLR/PLE enables targeting of cancer cell surfaces, which allows for extended extracellular presentation of cargos. This ability is used for targeted delivery of a potent immunostimulant, interleukin-12 (IL-12), to disseminated tumors in metastatic OC. Aspects of the manufacturing of other lipid-based nanocarriers, such as discoidal assemblies and immune stimulating complexes (ISCOMs), are also explored. We show that employing a bottom-up approach to produce lipid-based NPs from mixed micelles allows for greater control over NP self-assembly. With this procedure, we generated ISCOMs co-loaded with monophosphoryl lipid A (MPLA) via a scalable approach for clinical-scale manufacturing of the adjuvant, termed Saponin MPLA NanoParticles (SMNP). Moreover, we discover that this approach allows for precise control over liposome size from 50 nm to 1 µm with minimal polydispersity. 
Lastly, by exploiting lipid headgroup charge repulsion, we find that multivalent charged lipids yield discoidal lipid nanoparticles through this approach. Unlike previous attempts to generate lipid-based discs, this new class of NPs, termed charge-stabilized nanodiscs (CNDs), does not require disc-stabilizing agents such as proteins or polymers. CNDs are shown to be promising drug delivery vehicles, especially when coated with PLR/PLE via the LbL technique, where they achieve greater tumor accumulation than LbL-coated liposomes. On the use of LbL-NPs for cytokine delivery via PLR/PLE-coated NPs, we found that covalent conjugation of IL-12 to the liposomal core of LbL-NPs greatly improves targeting and retention of IL-12 in peritoneally disseminated OC tumors, enabling immunological and therapeutic effects not observed with free cytokine treatment. Mechanistic investigations revealed that these LbL-NPs rapidly accumulated in tumor nodules upon intraperitoneal (i.p.) administration, wherein shedding of the LbL coating allowed for gradual release of IL-12-lipid conjugates via lipid extraction by serum proteins present in interstitial fluid. After a single dose of IL-12 conjugated to LbL-NPs in an intraperitoneally disseminated OV2944 highly metastatic (HM-1) mouse model, we observed a dramatic increase in T cell levels within the ascites and the tumor nodules dispersed within the i.p. space, which was not observed with either free cytokine or unlayered IL-12-NPs. When evaluated for its effectiveness in this highly aggressive model, two doses could significantly enhance survival compared to even five times (5x) the amount of free cytokine. Remarkably, while the model was non-responsive to checkpoint inhibitor (CPI) therapy with anti-PD1 and anti-CTLA4, when combined with LbL-IL-12-NPs we achieved complete responses with robust immune memory induction. The mice were able to rapidly clear rechallenges with fresh cancer cells in the i.p. space. 
Towards the clinical translation of LbL-IL-12-NPs, we demonstrate that LbL assembly is readily performed via microfluidic mixing technology amenable to clinical-scale manufacturing. We also find that the polymer amount can be titrated to omit time-consuming purification steps. We further find that the LbL film conformation is key to maintaining therapeutic efficacy, as thicker films hinder IL-12 delivery. Lastly, we uncover that the binding target of PLE on the surface of cancer cells is SLC1A5, a glutamine amino acid transporter.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157260</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inosine-containing mRNA induces an innate immune response and is translated with lower efficiency</title>
<link>https://hdl.handle.net/1721.1/157259</link>
<description>Inosine-containing mRNA induces an innate immune response and is translated with lower efficiency
Bao, Caroline
Inosine is a nucleoside formed by deamination of adenosine by adenosine deaminases acting on RNA (ADAR). ADAR editing activity is known to play a key role in modulating the host cell’s immune response to RNA. Here, we specifically study the effect of the presence of inosine in RNA by generating an inosine-containing reporter mRNA sequence. We also generated mRNA sequences containing pseudouridine, an RNA modification known to decrease the immune response to in vitro transcribed (IVT) mRNA and elevate expression of the encoded gene, to examine the interaction between pseudouridine and inosine modifications.
While A-to-I editing activity is required for endogenous RNA to evade the innate immune response, our results show that inosine-containing IVT RNA induces an elevated immune response and is translated at a lower efficiency. This effect is dominant over pseudouridine modification, such that mRNAs containing both pseudouridine and inosine modifications still potently activate the innate immune response and exhibit a loss of translation. These results point to the potent immunostimulatory effects of inosine in transfected IVT mRNA. This elevated immune response is likely receptor-specific, and we have demonstrated that it cannot be attributed to the sensors RIG-I, MDA5, TLR3, or PKR.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157259</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Generation of Chemical Kinetic Models including Macromolecules in Multiphase Systems</title>
<link>https://hdl.handle.net/1721.1/157258</link>
<description>Automatic Generation of Chemical Kinetic Models including Macromolecules in Multiphase Systems
Pang, Hao-Wei
Detailed chemical kinetic models are indispensable tools for unraveling the complexities of industrial and environmental chemistry systems. Many important industrial and environmental chemistries involve thousands of species and hundreds of thousands of complex pathways, which are difficult to resolve manually. To address this challenge, automatic mechanism generation software has been developed. Previous studies have demonstrated promising quantitative agreement between automatically generated mechanisms and experimental data. However, those studies have primarily focused on small molecules in single-phase systems, overlooking the complexities of multiphase systems and macromolecules commonly found in industrial and environmental processes. This thesis introduces advancements in three key areas of automatic mechanism generation. Part I extends the current framework of automatic mechanism generation to tackle the longstanding issue of polymer fouling in industrial systems. Two detailed kinetic models are presented: one for anaerobic fouling and the other for aerobic fouling in distillation columns. Modeling innovations are introduced that allow one to construct models including thousands of chemical reactions occurring in the liquid and film phases, vapor-liquid equilibria of hundreds of molecules, transport between the phases, and flows between the trays; all of these factors significantly affect the fouling rate. Most of the critical model parameters are derived from quantum chemistry calculations. The modeling method is validated using experimental film growth measurements made with a quartz-crystal microbalance. These models clarify the mechanistic details of the fouling process. Part II develops machine learning models for predicting thermochemical parameters in gas and liquid phases. A decision tree model based on subgraph isomorphism for gas-phase radical thermochemistry is presented. 
The model demonstrates improved accuracy compared to the existing empirical model and reliable uncertainty estimates for both interpolation and extrapolation tasks. Additionally, the effectiveness of active learning for building models for solvation free energy is explored under various compositions of initial training sets and uncertainty estimation methods for data acquisition. The possibility of aiding data acquisition with unsupervised learning for active learning is also assessed. Part III adds new features and enhances the performance of multiple packages under the Reaction Mechanism Generator software suite, originally developed by the MIT Green Group. New tools are developed to facilitate thermochemical data augmentation, multiphase simulation for automatic mechanism generation, and the automatic implementation of quasi-steady-state assumptions during the simulation of detailed kinetic models. A new species and reaction selection algorithm is developed to enable the automatic generation of mechanisms for molecular growth systems. Various speed improvement techniques are applied to improve both the simulation speed and sensitivity analysis of large-scale detailed kinetic models. By addressing these key areas, this thesis contributes to the advancement of automatic mechanism generation, paving the way for more accurate and efficient modeling of complex chemical systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157258</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Technology Adoption Process in Accounting and Finance Using Systems Thinking Methods</title>
<link>https://hdl.handle.net/1721.1/157257</link>
<description>Improving Technology Adoption Process in Accounting and Finance Using Systems Thinking Methods
Chun, Albert Y.
In the era of digital transformation, Accounting and Finance (A&amp;F) functions face the challenge of making well-informed decisions about which technologies to adopt, which processes to prioritize, and why. These decisions require stakeholders to carefully evaluate available options, assess their implications and tradeoffs, and align diverse preferences to make well-supported investment choices. Conducting this process in a siloed and unstructured manner can lead to inefficiencies.&#13;
This study explores the application of Systems Thinking (ST) and Systems Engineering (SE) methods, developing an integrated framework that combines Rich Picture, Object-Process Diagram (OPD), Design Structure Matrix (DSM), and Multi-attribute Tradespace Exploration (MATE) to enhance the technology adoption decision-making process within A&amp;F functions. The focus is on Internal Audit (IA) as a case study for a simplified model and demonstration. While empirical data collection and hypothesis testing were not conducted due to data and time constraints, qualitative insights were gathered from industry practitioners.&#13;
Key findings suggest that the integrated framework can potentially reduce the time and effort needed to reach technology adoption decisions. Providing a structured and comprehensive approach ensures that the decision-making process is more holistic, unbiased, and quantifiable. This can also offer post-implementation benefits, as the technologies adopted align better with the organization’s requirements and preferences, resulting in improved efficiency and effectiveness.&#13;
This study extends the practical application of ST methodologies into A&amp;F. By presenting this integrated framework, it contributes to the foundation for future research on applying ST to improve the technology adoption decision-making in A&amp;F.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157257</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Red Teaming Language Conditioned Robotic Behavior</title>
<link>https://hdl.handle.net/1721.1/157255</link>
<description>Red Teaming Language Conditioned Robotic Behavior
Abhangi, Nishant
Natural language instruction following is important for robots to carry out tasks specified by human commands. Accordingly, many language-conditioned robots have been trained on a wide variety of datasets with tasks annotated by natural language instructions. However, these datasets are often limited in size, so the distribution and nature of the instructions given by real-world users may differ from those in the datasets, making it unclear how these robots will perform in real-world environments. A large-scale evaluation with diverse instructions is therefore needed to benchmark the performance of these robots, but using humans to collect more annotations is prohibitively expensive. We show that recent large language models provide a scalable and inexpensive way to perform such an evaluation, and that robots exhibit a large performance drop when evaluated on this larger set of instructions. We also show that we can use different prompts to LLMs to control properties such as the diversity of the generated instructions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157255</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instrumenting Observability in a Decentralized Microservice Architecture</title>
<link>https://hdl.handle.net/1721.1/157254</link>
<description>Instrumenting Observability in a Decentralized Microservice Architecture
Liu, Helen X.
Software systems have increased in complexity over time, and with this increased complexity has come a greater need to keep these systems organized and functioning efficiently. Observability is central to ensuring correct and effective system function. Without system monitoring, it is difficult to pinpoint when errors occur and correct them at their sources. Monitoring also helps developers understand a system from the outside, allowing them to ask questions about the system’s state and function without needing to know the details of its internal behavior. While existing observability frameworks are available, these solutions do not target microservice architectures, which are increasingly used with expansive code bases, such as those likely to be employed in an industry environment. They also require extensive configuration to be fully integrated with a pre-existing system. As such, the challenge lies primarily in adapting observability solutions to a decentralized microservice architecture found in an industry setting. The existing solutions also come with advantages and disadvantages for different situations, so they are often incomplete in addressing an entire system’s needs. The integrated system created here satisfies our system’s requirements for a consolidated observability platform while also enabling future customizations, thereby allowing problems to be identified more quickly and proactively.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157254</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From the body to the brain: Studying drug delivery and physiological interactions using MRI</title>
<link>https://hdl.handle.net/1721.1/157253</link>
<description>From the body to the brain: Studying drug delivery and physiological interactions using MRI
Dawson, Miranda
The brain is in continuous communication with the rest of the body. Nerves connect the peripheral and central nervous system, and complex vasculature networks selectively permit passage of small molecules with an exogenous origin into the brain parenchyma. Although brain-body interactions underpin a host of cognitive and physiological phenomena, they are often overlooked in studies of brain biology and mental function. We studied aspects of the interaction between brain and body using functional and molecular magnetic resonance imaging (MRI), in combination with other tools. In a first project, we examined properties of the blood-brain barrier (BBB). The BBB is a highly selective collection of endothelial cells and tight junction proteins that restrict passage of extracerebral substances from the blood vessels into the brain tissue. We disrupted and bypassed the BBB to deliver an MRI contrast agent and quantitatively assessed the resulting contrast dynamics. We discovered that individual brain regions display method-independent susceptibility to BBB disruption and washout, suggesting principles for calibrating drug delivery and understanding the propensity for chemical exchange across the BBB. We then used one of the widefield brain delivery techniques to apply a novel contrast agent for the study of the cholinergic system, a neurochemical pathway important for motor control mechanisms in both the central and peripheral nervous systems. Kinetic modeling of probe distributions revealed intrinsic localization of cholinergic enzymes. Finally, we applied related neuroimaging tools to an animal model of substance abuse, a pathology for which brain-body interactions are particularly engaged but underappreciated. We designed a study to investigate the role of the insula, a cortical mediator of peripheral physiological signals, in responses to opioid exposure. 
With molecular imaging approaches, we show the insula shapes drug-dependent brain phenotypes and physiological responses during substance exposure and withdrawal. In all, this work serves as a demonstration of the power of quantitative neuroimaging methods for multifaceted investigation of brain and body relationships.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157253</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovering non-equilibrium mechanisms that regulate structure and function of biomolecular condensates using phase-field modeling</title>
<link>https://hdl.handle.net/1721.1/157252</link>
<description>Discovering non-equilibrium mechanisms that regulate structure and function of biomolecular condensates using phase-field modeling
Natarajan, Pradeep
Biomolecular condensates are phase-separated assemblies in living cells that form through cooperative interactions between their constituents such as proteins and RNAs. They are emerging as important organizers of biochemistry in cells and are dysregulated in disease. Understanding the physical principles that shape the form and function of these condensates is a fundamental scientific challenge, which if addressed can provide novel therapeutic avenues to improve human health. The equilibrium principles behind condensate formation are well understood. Several studies have investigated the impact of multivalency, protein sequence, protein-RNA interactions, and the role of DNA in modulating biomolecular interactions that drive phase separation. However, the living cell is inherently out of equilibrium with non-equilibrium reactions that constantly burn ATP and turn over biomolecules. My thesis investigates how the interplay between biomolecular interactions and non-equilibrium reactions that turn over biomolecules affect the structure and function of biomolecular condensates. Phase-field modeling is used for these investigations, as this approach has been historically successful in answering similar questions in other fields such as material science. Prior work shows that proteins present in biomolecular condensates associated with RNA transcription undergo complex coacervation with the RNA product. The first project in this thesis investigates the interplay between complex coacervation and spatially heterogeneous RNA synthesis on condensate morphology and dynamics using a phase-field model. This simple model exhibits a rich variety of dynamical behaviors and steady states. It also provides a unifying framework to explain diverse experimental observations related to condensate morphology and dynamics such as vacuole formation, aspherical shapes, directed motion, and splitting-fusion behaviors. 
The second project investigates how transcription of messenger RNA (mRNA) by transcriptional condensates is modulated by other RNAs in the vicinity, such as long non-coding RNAs (lncRNAs). Our model reveals that lncRNA transcription in the vicinity can regulate mRNA transcription by altering the protein concentration and lifetime of transcriptional condensates, promoting mRNA transcription from genes expressed at a low level and inhibiting transcription from highly expressed genes. This model provides a unifying framework to reconcile conflicting observations in the literature about transcriptional regulation by lncRNAs. The final project focuses on the fibrillar center of the nucleolus, an important condensate that is involved in ribosome biogenesis. Using a phase-field model that explicitly accounts for rRNA-protein interactions in the nucleolus and the non-equilibrium reaction of rRNA transcription, we show that the coarsening of fibrillar centers is arrested, leading to a preferred size. Altering this size affects rRNA export and processing from the fibrillar centers. These predictions are validated by experiments. Using a combination of experiments and theory, we uncover the non-equilibrium mechanism that controls the size of fibrillar centers and the functional consequences of this size control on rRNA processing.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157252</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular, Genetic, and Process Approaches for Improving Secreted Pharmaceutical Protein Quality in Komagataella phaffii</title>
<link>https://hdl.handle.net/1721.1/157251</link>
<description>Molecular, Genetic, and Process Approaches for Improving Secreted Pharmaceutical Protein Quality in Komagataella phaffii
Yang, Yuchen
Biopharmaceutical products constitute a significant portion of the global bioeconomy. Compared to traditional synthetic small-molecule drugs, recombinant therapeutic proteins offer advantages like enhanced specificity and reduced side effects, and there has been tremendous growth in their innovation thanks to modern DNA technologies and AI-driven algorithms. While mammalian platforms such as Chinese Hamster Ovary (CHO) cells are commonly used for their high production titer and capability for complex post-translational modifications, their high cost of goods manufactured can greatly constrain biopharmaceutical global accessibility. The yeast Komagataella phaffii is the prime candidate for next-generation biomanufacturing for reasons including simpler host biology, reduced time to market, and better sustainability. Nevertheless, product quality, such as size/charge variants and non-human glycosylation, can be of major concern for proteins secreted from this host organism. This thesis explores three different engineering approaches aimed at improving the quality of both aglycosylated and glycosylated proteins, with a particular focus on monoclonal antibodies, the leading class of protein biopharmaceuticals by both sales and innovation. Firstly, we demonstrated significant quality improvements through molecular sequence engineering of aglycosylated monoclonal antibody backbones. By making informed, conservative mutations to two or three amino acid residues, we greatly reduced product-related variants from proteolysis and N-terminal variations. We further showed the comparability between yeast- and CHO-secreted products, providing a framework for rapid product development with this unconventional yeast. Secondly, we applied CRISPR-Cas9 gene editing technology to humanize the glycosylation pathway of K. phaffii. We achieved homogeneous G0 glycosylation on a reporter peptide by resolving a previously unreported synthetic lethality via a transcriptomics-informed approach.
Key challenges for monoclonal antibody glycosylation were also identified through further comprehensive pathway engineering. Lastly, we examined the performance of glycoengineered K. phaffii strains under varied process conditions. Employing a machine learning algorithm, we improved the desired glycan abundance on a subunit vaccine candidate. The process-robustness of engineered strains suggests the potential of this host as a viable commercial biomanufacturing host.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157251</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microporous Polymer-Metal Organic Framework (MOF) Hybrid Materials for Separations</title>
<link>https://hdl.handle.net/1721.1/157250</link>
<description>Microporous Polymer-Metal Organic Framework (MOF) Hybrid Materials for Separations
Wu, Wan-Ni
Membrane-based separation holds significant promise for reducing the high energy consumption associated with traditional thermal-based separation processes in the chemical industry. Recent advancements in microporous materials, such as polymers of intrinsic microporosity (PIMs) and metal-organic frameworks (MOFs), have demonstrated performance improvements over conventional polymers. Mixed-matrix membranes (MMMs) have emerged as a potent strategy, combining the processability of polymers with the superior separation properties of MOFs to create high-performance membranes. Additionally, the integration of MOFs into polymers can mitigate stability issues such as plasticization, swelling, and physical aging. This thesis investigates MMMs based on PIM-1 and its derivatives, along with UiO MOFs, for gas and organic solvent-based separations. The studies focus on enhancing polymer–MOF interfacial compatibility, understanding penetrant transport, and addressing key challenges in MMM design and fabrication. A longstanding challenge in MMM fabrication is poor polymer–MOF compatibility, leading to particle agglomeration and non-selective interfacial voids. To address this, the strategy of decorating polymers and MOFs with compatible functional groups was explored. By studying UiO-66-NH2 MOF and carboxylic acid-functionalized PIM-1 (PIM-COOH), it was demonstrated that MMMs with compatible functional groups exhibit enhanced polymer–MOF interaction and plasticization resistance. To further understand transport within these MMMs, self-diffusivities of gases were measured using pulsed-field gradient nuclear magnetic resonance and compared to macroscopic diffusivities obtained from permeation and sorption analysis.
The PIM–MOF material platform was also extended to solvent-based separations. To understand solvent transport through microporous polymers, intrinsic properties of swollen polymers were obtained both experimentally and computationally, and these properties were correlated with solvent transport metrics. Finally, MMMs composed of PIM-COOH and UiO MOFs with systematically increasing pore apertures were evaluated for their solvent nanofiltration performance. Key challenges such as MOF instability and non-ideal polymer–MOF interfaces were identified. In summary, this thesis delves into the structure-property relationships of microporous materials for gas and solvent-based separations, offering insights that can guide the future design of advanced composite membranes for challenging separations.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157250</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>SongGen: Framework for Controllable AI Song Generation through Interactive Songwriting and Artist Emulation</title>
<link>https://hdl.handle.net/1721.1/157249</link>
<description>SongGen: Framework for Controllable AI Song Generation through Interactive Songwriting and Artist Emulation
Arora, Ajay
We propose SongGen, an AI-based song-writing and song co-creation framework. Building upon existing AI tools like Suno.ai, SongGen features a chat interface with a trained AI songwriter assistant, emulating the traditional back-and-forth of human collaboration. The system offers enhanced capabilities for greater control over the songwriting process, including concept ideation, lyric generation and editing, real-time song generation, and granular instrumental specification. Comparative evaluations demonstrate SongGen’s superiority in key metrics such as steerability, expressiveness, personalization, and user satisfaction. We also present an extension of the SongGen framework for artist emulation and on-demand song generation. Future development aims to incorporate voice-based interaction and real-time voice conversion, enabling music artists to guide fans in creating personalized songs.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157249</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hofstadter Physics and Composite Fermionic Phase in Moiré Systems</title>
<link>https://hdl.handle.net/1721.1/157248</link>
<description>Hofstadter Physics and Composite Fermionic Phase in Moiré Systems
Ding, Shuhan
This thesis explores the intricate electronic phenomena in Moiré systems, particularly focusing on twisted bilayer transition metal dichalcogenides (TMD). These systems, with their unique superlattice structures and strong electron correlations, provide fertile ground for investigating novel quantum states. A key focus is on understanding Hofstadter physics and the emergence of composite fermion phases in these materials. In this work, we first develop a continuum model to describe the low-energy electronic structure of twisted TMD bilayers, emphasizing the role of the Moiré superlattice in modifying the band structure and introducing non-trivial topological properties. We analyze the resulting Hofstadter spectrum under an external magnetic field, revealing the rich fractal pattern and the impact of valley polarization induced by the magnetic field. Building on this framework, we delve into the concept of composite fermions, particularly in the context of the fractional quantum Hall effect (FQHE). We extend Jain’s composite fermion theory and the Chern-Simons field theory to Moiré TMD systems, proposing the existence of an anomalous composite fermion liquid state at half-filling. Through a detailed mean-field analysis, we demonstrate that this state, characterized by a strong valley polarization and an effective magnetic field arising from Berry curvature, could be energetically favored under certain conditions. Our findings suggest that Moiré TMDs are promising candidates for realizing fractional Chern insulators and other exotic quantum phases, opening up new avenues for experimental exploration and potential applications in quantum technology.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157248</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Condensed Buck-Boost Switched Capacitor Converter for Efficient Voltage Distribution in Electrified Aircraft</title>
<link>https://hdl.handle.net/1721.1/157247</link>
<description>Condensed Buck-Boost Switched Capacitor Converter for Efficient Voltage Distribution in Electrified Aircraft
Aron, Aklilu
Switched capacitor converters are a category of power electronic converters that harness the significantly improved energy density of capacitors as opposed to that of their conventional, inductor-based counterparts to reap benefits in terms of efficiency, size, and utilization. This work presents the analysis, design, construction, and evaluation of one such converter, inspired by the flying capacitor multilevel topology and referred to as a condensed buck-boost converter. This converter is designed and built for an application as the interface between the battery voltage and DC bus on partially electrified aircraft, where the advantages of its ability to step up/down voltage in an efficient and lightweight fashion can be fully realized. To implement this topology in hardware for the first time, this work utilizes new monolithic, bidirectional GaN FETs, whose reverse voltage blocking capabilities open new possibilities for a converter design that wastes less power and occupies less board area. This converter is compared with others that perform similar functions to showcase the benefits that this topology has to offer.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157247</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of the hif-1-dependent hypoxic stress response by C. elegans</title>
<link>https://hdl.handle.net/1721.1/157246</link>
<description>Regulation of the hif-1-dependent hypoxic stress response by C. elegans
Diehl, Calista Sorine
All aerobic organisms need a way to sense oxygen levels and respond accordingly when in an unfavorable environment. In almost all metazoans, oxygen is both sensed and regulated by the HIF-1 (hypoxia inducible factor) transcription factor that is activated in periods of hypoxia and goes on to regulate hundreds of genes allowing for appropriate adaptations to hypoxia. HIF-1 activation results in changes at the cellular, tissue and whole organism levels such as increases in glycolysis, vascularization and erythropoiesis; HIF-1 is a critical factor in human development as well as progression of numerous diseases including ischemic stroke, COPD and cancer. HIF-1 is negatively regulated by the O2-dependent prolyl hydroxylase EGL-9 (known as EGLN, PHD, or HIF-PH in mammals). In normoxic conditions, EGL-9 uses ambient O2 to hydroxylate HIF-1. Hydroxylated HIF-1 is recognized by the von Hippel-Lindau (VHL-1) tumor suppressor protein, a component of an E3-ubiquitin ligase complex that targets HIF-1 for proteasomal degradation. In hypoxic conditions, EGL-9 is unable to hydroxylate HIF-1; stabilized HIF-1 enters the nucleus to regulate the expression of target genes that coordinate the hypoxia response. Increased activity of HIF-1, produced by either hypoxia or an egl-9(lf) mutation, induces the hypoxic stress response, which coordinates numerous adaptive changes in C. elegans, including retention of eggs in the uterus, decreases in locomotion and defecation rates, and increased resistance to not only hypoxia but also other stresses including oxidative stress and ER stress. By identifying suppressors of the egl-9(lf) mutant phenotype of egg retention, we have identified two independent pathways that regulate aspects of the hypoxic response in C. elegans. 
First, we discovered that loss of the conserved nonsense-mediated decay (NMD) pathway, an RNA surveillance mechanism that degrades aberrant mRNA transcripts with premature termination codons, suppressed the egl-9(lf)-induced changes in egg laying and defecation and caused increased hypoxia sensitivity. Other aspects of the egl-9(lf) phenotype, such as resistance to oxidative stress and changes in locomotion, were not affected by NMD-pathway mutations, indicating that NMD modulates specific aspects of the hypoxia response. Secondly, we found that loss of the neprilysin metallopeptidase, nep-2, suppressed the egl-9 Egl phenotype through the degradation of multiple neuropeptides including the known NEP-2 target SNET-1. Our findings reveal two different pathways that function downstream of egl-9 to regulate aspects of the hypoxic stress response, both providing a new pathway with which to study the neuromuscular control of egg laying using NEP-2, and critically showing the integration of the evolutionarily conserved hypoxic-stress response and nonsense-mediated decay pathways.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157246</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cellulose Nanofoams: 3D Printing and Characterization</title>
<link>https://hdl.handle.net/1721.1/157245</link>
<description>Cellulose Nanofoams: 3D Printing and Characterization
Padia, Vineet
In recent years, the advancement in cellulosic nanofoams has been considerable. Yet, their customization potential for diverse application requirements has been constrained by reproducibility challenges. Our research, therefore, focused on two primary objectives: enhancing the thermal regulation capabilities and mechanical properties of cellulose nanofibrils (CNF) nanofoams, and developing a reproducible methodology for printing customized three-dimensional (3D) structures using direct-ink-write (DIW) technology and molding.&#13;
&#13;
We developed composite nanofoams using TEMPO-modified cellulose nanofiber (TCNF). The resultant composite nanofoams showcased remarkable properties such as ultra-low thermal conductivity, low density, outstanding flexibility, and infrared shielding capabilities.&#13;
&#13;
In a bid to create robust and environmentally friendly nanofoams, we employed a crosslinking process with CaCl2. The crosslinked nanofoams were extraordinarily lightweight yet boasted superior mechanical properties, significantly amplified by the crosslinker. Remarkably, these freeze-dried T-CNF/CaCl2 nanofoams maintained their form and demonstrated admirable flexibility, even when subjected to weight exceeding thousands of times their own. Furthermore, transient characterization confirmed their excellent thermal insulation capabilities.&#13;
&#13;
In conclusion, our research has pioneered the fabrication of sustainable, high-stability cellulose nanofoams. We have significantly enhanced the thermal management capabilities and mechanical performance of these nanofoams, marking a remarkable advancement in the field. The demonstrated sustainability, biocompatibility, ultra-light weight, high porosity, and deformability of the resultant nanofoams suggest considerable potential for diverse applications, including thermal insulation, shock and vibration damping, as well as tissue engineering.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157245</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Machine Connectivity Guidelines for Production Floor</title>
<link>https://hdl.handle.net/1721.1/157244</link>
<description>Development of Machine Connectivity Guidelines for Production Floor
Sehnawi, Kenan Hayel
This thesis introduces and uses a standardized method for assessing machine connectivity at manufacturing facilities and develops a roadmap for an organization looking to implement connectivity at its facilities. As technology rapidly advances and Industry 4.0 takes hold of manufacturing worldwide, it is essential for manufacturing companies to utilize the latest technology to maintain a competitive advantage by optimizing operations, improving productivity, and increasing throughput. In this work, an overview of machine connectivity and its benefits is presented, and technologies and security measures used for connectivity are explored. Upon compilation of this information, a comprehensive rubric was developed with six weighted connectivity criteria, each scored from 0 (no progress) to 4 (fully complete), from which a total connectivity score can be computed. The rubric serves as a guiding tool for gauging a manufacturing facility’s level of maturity with regard to connectivity, and helps identify areas of need both within a facility and within an organization as a whole. The connectivity levels of six different manufacturing facilities were assessed using the rubric. The results were compiled to understand the development of connectivity at different facilities across the organization. The learnings from this analysis are used to develop guidelines as the organization continues its push towards full connectivity across all of its facilities. The next steps in this initiative are to: 1) utilize the developed rubric to assess connectivity at all of its manufacturing facilities, 2) identify facilities in need of the most resources in order to plan and execute connectivity, and 3) encourage collaboration between facilities to expedite the connectivity implementation process.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157244</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cycle Time Reduction for CNC Machining Workcells in High-Mix Low-Volume Manufacturing</title>
<link>https://hdl.handle.net/1721.1/157243</link>
<description>Cycle Time Reduction for CNC Machining Workcells in High-Mix Low-Volume Manufacturing
Sun, Brandon Christopher
The demand for the product under investigation exceeds the available manufacturing capacity, with the CNC milling workcell identified as the bottleneck operation. This research, conducted in an active, high-mix, low-volume production environment, focuses on evaluating and implementing improvements to CNC machining parameters to enhance the workcell's capacity. Key areas of investigation include machining speeds and feeds, depth of cut, machine settings, toolpath strategies, stepover percentages, and alternative tooling. The study specifically targeted the initial roughing operation, which uses a feed mill and is the longest milling process. Addressing the challenges of high mix and low volume, the research successfully optimized machining and CNC programming parameters, reducing total machining cycle times by 25% and resulting in a 33% increase in throughput. Additionally, the methodologies and findings from this work have provided a framework for implementing further milling process improvements outside of the roughing operation, demonstrating their applicability to similar production scenarios.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157243</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and Execution of a Testing Strategy for Omnidirectional Wheels</title>
<link>https://hdl.handle.net/1721.1/157242</link>
<description>Development and Execution of a Testing Strategy for Omnidirectional Wheels
Donnellan, Michael J.
Omnidirectional wheels enable robots to achieve holonomic motion; however, this often comes at the cost of increased rolling resistance compared to traditional caster wheels. The rolling resistance in omnidirectional wheels is higher than in many other wheels due to several factors including an irregular tread shape, material compliance, and friction in the bushing-like cross rollers during lateral motion. Testing standards exist for characterizing the rolling resistance, compressive strength, and other attributes of commonly used wheels such as caster wheels. However, there are no comprehensive testing standards or research that broadly characterize the performance of omnidirectional wheels. Here, test methods are described for characterizing the load relaxation, stiffness, and rolling resistance of omnidirectional wheels, and the results from these tests are presented. Test apparatuses for static loading and rolling resistance were created. Test results were analyzed to determine important factors for determining the ultimate compressive strength in static loading and the rolling resistance coefficient of an array of omnidirectional wheels, and results indicate wheel manufacturing methods and materials are the most important factors for determining these responses.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157242</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Power of Perception in Human-AI Interaction: Investigating Psychological Factors and Cognitive Biases that Shape User Belief and Behavior</title>
<link>https://hdl.handle.net/1721.1/157241</link>
<description>The Power of Perception in Human-AI Interaction: Investigating Psychological Factors and Cognitive Biases that Shape User Belief and Behavior
Lee, Eunhae
This thesis investigates the psychological factors that influence belief in AI predictions, comparing them to belief in astrology- and personality-based predictions, and examines the "personal validation effect" in the context of AI, particularly with Large Language Models (LLMs). Through two interconnected studies involving 238 participants, the first study explores how cognitive style, paranormal beliefs, AI attitudes, and personality traits impact perceptions of the validity, reliability, usefulness, and personalization of predictions from different sources. The study finds a positive correlation between belief in AI predictions and belief in astrology- and personality-based predictions, highlighting a "rational superstition" phenomenon where belief is more influenced by mental heuristics and intuition than by critical evaluation. Interestingly, cognitive style did not significantly affect belief in predictions, while paranormal beliefs, positive AI attitudes, and conscientiousness played significant roles. The second study reveals that positive predictions are perceived as significantly more valid, personalized, reliable, and useful than negative ones, emphasizing the strong influence of prediction valence on user perceptions. This underscores the need for AI systems to manage user expectations and foster balanced trust. The thesis concludes with a proposal for future research on how belief in AI predictions influences actual user behavior, exploring it through the lens of self-fulfilling prophecy. Overall, this thesis enhances understanding of human-AI interaction and provides insights for developing AI systems across various applications.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157241</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying Emissions and Costs of Geologic Hydrogen: An Integrated Lifecycle Emissions and Techno-economic Approach</title>
<link>https://hdl.handle.net/1721.1/157240</link>
<description>Quantifying Emissions and Costs of Geologic Hydrogen: An Integrated Lifecycle Emissions and Techno-economic Approach
Blackford, Timothy
In the pursuit of sustainable energy solutions, this thesis explores the lifecycle emissions and economic feasibility of geologic hydrogen production. This research extends Brandt's 2023 study of 'prospective' lifecycle assessment (LCA), enhancing the underlying open-source LCA model used in this work and adding a preliminary techno-economic analysis (TEA). The findings demonstrate that geologic hydrogen developments should have emissions intensities that compare favourably to all other hydrogen production pathways. The value of lifetime emissions intensity for Brandt’s Baseline case is estimated at 0.40 kgCO2e/kgH2, representing an increase of ~6% over Brandt’s estimation. The study also highlights the potential for geologic hydrogen to achieve competitive levelized costs (estimated at $1.45/kg), making it a promising candidate in the hydrogen economy. It finds that to achieve the best possible emissions and economic results, proponents of geologic hydrogen developments should seek to maximise the productivity of each well. It also studies the impact of the United States regime of production tax credits for hydrogen, finding that the fivefold increase in the magnitude of credits for meeting employment conditions is generally more impactful than lowering emissions intensity. The thesis underscores the importance of continued refinement of LCA and TEA models to understand geologic hydrogen resources better and ensure they are developed appropriately.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157240</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Shipyard to Sea: A Flexible System Design Approach to the Transition from Shipbuilding to Operations A Case Study Using the United States Coast Guard Offshore Patrol Cutter Program</title>
<link>https://hdl.handle.net/1721.1/157239</link>
<description>From Shipyard to Sea: A Flexible System Design Approach to the Transition from Shipbuilding to Operations A Case Study Using the United States Coast Guard Offshore Patrol Cutter Program
Kime, Jeremy A.
The United States Coast Guard faces significant challenges transitioning new ships from shipbuilding to operations. Historically the low volume and irregular pace of major ship deliveries, combined with diverse homeporting factors, have resulted in anomalous post-delivery requirements. Today, a growing fleet, personnel shortages, and sweeping technological advancements are amplifying the complexity of post-delivery activities. At the same time, the Coast Guard is engaged in its largest shipbuilding effort since World War II, with seven acquisition programs scheduled to deliver 134 new ships over the next 15 years. In light of these factors the current approach, which places significant strain on crews, escalates costs, and delays operational use of the Coast Guard’s newest assets, warrants thorough examination. This thesis examines the issue through case study analyses using the Offshore Patrol Cutter (OPC) Program. The Coast Guard’s challenges are driven by three primary factors: the inherent uncertainty in ship construction, sociotechnical system dynamics associated with organizational management of pre-commissioning crews, and the ongoing evolution of technology. To address these challenges, this analysis employs an integrated approach, synthesizing principles and techniques from Architecting Innovative Enterprise Strategy (ARIES), Flexible Engineering Design (FED), and System Design and Management (SDM). This systems thinking approach aims to develop opportunities to reduce costs, improve schedules, and optimize workforce outcomes. The analysis recommends a three-phased strategy that could yield cost savings on the order of $400 million over the OPC Program’s lifespan, significantly mitigate risks associated with unforeseen shipbuilding developments, and enhance organizational outcomes regarding workforce, operational availability, and life cycle sustainment. 
The staffing of pre-commissioning crews is pinpointed as a pivotal discretionary event that triggers an exponential increase in system complexity and a surge in scope by introducing interdependent yet organizationally disparate requirements. Consequently, major personnel activities are decoupled from highly variable ship construction milestones. This paves the way for a paradigm shift from fixed to flexible approaches, replacing fragmented, ad hoc approaches with a flexible system architecture capable of continuous enterprise learning and improvement. Dynamic post-delivery activities are reimagined as a continuous business line, to professionalize the transition of new ships from shipbuilding to operations.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157239</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Intangible Reverberations Following Mergers &amp; Acquisitions</title>
<link>https://hdl.handle.net/1721.1/157238</link>
<description>The Intangible Reverberations Following Mergers &amp; Acquisitions
Warren, Laura N.
This study preliminarily investigates how merger and acquisition (M&amp;A) activities affect employees as stakeholders of the company system - specifically in the areas of leadership, communication, company direction, project autonomy, and path for career growth.
Interviews of 14 employees supporting the oil and gas industry were conducted to determine the effect (if any) that M&amp;A activities had on their careers and any similarities in their experiences. This data was evaluated against research completed by Steigenberger &amp; Mirc and Schweizer &amp; Patzelt.
While the hypotheses presented cannot be proven, recommendations for future research are provided to gain and evaluate additional information.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157238</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A synthetic biology platform for malaria parasites based on orthogonal transcriptional control</title>
<link>https://hdl.handle.net/1721.1/157237</link>
<description>A synthetic biology platform for malaria parasites based on orthogonal transcriptional control
Cárdenas Ramírez, Pablo
Malaria is responsible for half a million deaths each year in some of the poorest communities around the world. Furthermore, the evolution of drug resistance among malaria parasites threatens to continue this trend. However, our understanding of malaria parasite biology is held back by a lack of tools with which to study the function of their genes. In light of this, we have created systems that control gene expression in the malaria parasite Plasmodium falciparum using bacterial repressor proteins. These are the first tools to reliably control malaria parasite transcription and offer the most robust method of conditional gene expression in Plasmodium parasites to date. We develop automated DNA design software to apply this technology to study essential parasite genes for functional genomics and confirm compound-protein interactions for drug discovery. We hope these tools advance efforts to engineer and control malaria parasites in the future.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157237</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Degradation Mechanisms and Applications in Ion Intercalation Materials</title>
<link>https://hdl.handle.net/1721.1/157236</link>
<description>Degradation Mechanisms and Applications in Ion Intercalation Materials
Zhuang, Debbie
Lithium ion batteries (LiBs) are a pivotal energy storage technology that are widely adopted for their high energy density and safety. At a macroscopic level, LiBs operate at a micrometer length scale, but consist of many active material nanoparticles which participate in reversible electrochemical reactions that store and release energy. These particles control the crucial processes for energy storage in macroscopic devices, generating a process spanning multiple length and time scales in LiBs. However, despite the ubiquitous application of LiBs in many industries, degradation limits their lifespan, hindering their broader applicability in applications demanding high energy density and extended lifespans, such as electric vehicles (EVs). Dominant degradation occurs at the nanoparticle level involving various mechanisms, such as formation of resistive films on the particle surface or surface phase transformations in common LiB materials. The effects of degradation are observed at the macroscopic level from electrochemical responses such as voltage or current measurements. Bridging the gap between microscopic and macroscopic scales to extract particle level degradation mechanisms from electrode scale responses is essential for understanding LiB degradation. These methods can be used to quantify degradation in battery materials for second life use, designing degradation resistant materials, and more.

Here, I propose a comprehensive multiscale framework that initially models LiB degradation at a single particle scale, using nickel rich materials as an example, then projects single particle degradation into population scale for both solid solution and phase separating materials. Furthermore, I analyze and design improved pulse diagnostics using hybrid pulse power characterization (HPPC) methods to extract physical microscopic degradation mechanisms from electrode-level responses. Overall, I set up a consistent framework modeling degradation from single particle to population level and vice versa in LiBs.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157236</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hyperbolic String Field Theory</title>
<link>https://hdl.handle.net/1721.1/157235</link>
<description>Hyperbolic String Field Theory
Firat, Atakan Hilmi
This thesis develops string field theory whose elementary interactions are parameterized using hyperbolic geometry. We introduce a systematic procedure to characterize its off-shell data: the local coordinates around punctures on Riemann surfaces as a function of complex structure and the vertex regions in the relevant moduli spaces over which the moduli integration is performed. This procedure exploits the relation between hyperbolic geometry and the semi-classical Liouville theory. We demonstrate that the (generalized) hyperbolic three-string vertex is exactly solvable, while the higher-order vertices can be obtained via the conformal bootstrap of Liouville theory in terms of classical conformal blocks and the DOZZ formula. The four-string and tadpole vertices are constructed explicitly using the known expressions of the associated blocks. Our method suggests the existence of a hidden cubic structure within hyperbolic string field theory.

We also take the WKB-like limit of our construction and demonstrate that it can be used to characterize Strebel quadratic differentials on Riemann surfaces. These differentials encode the geometry of polyhedral vertices of classical closed string field theory. The implication is that they can be embedded into the hyperbolic paradigm. The validity of our results in this regime is further confirmed by developing a topology-independent machine learning algorithm characterizing Strebel differentials. Such an algorithm provides an alternative, numerically scalable approach for computing closed string field theory interactions. Finally, our work investigates the open-closed string field theory in the presence of a large number of D-branes. We establish its consistency by solving the relevant geometric version of the Batalin-Vilkovisky master equation using hyperbolic geometry and investigate its limits.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157235</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Evaluation of Underwater Semantic SLAM</title>
<link>https://hdl.handle.net/1721.1/157234</link>
<description>Experimental Evaluation of Underwater Semantic SLAM
Song, Thomas Jeongho
Autonomy is crucial for underwater vehicles due to the challenging and inaccessible nature of underwater environments. These environments pose significant difficulties for human-operated systems because of limited visibility, high pressure, and vast areas that are costly and risky to explore manually. Implementing autonomy in underwater vehicles presents unique challenges due to the marine environment's harsh and complex nature. Underwater communication is severely limited as water absorbs and scatters most electromagnetic signals used in terrestrial communications. This necessitates the use of acoustic communication, which has a lower bandwidth and is prone to delays and signal distortion. Similarly, GPS signals do not penetrate water, complicating navigation and creating dependence on inertial and sonar sensors, which suffer from noisy measurements that are guaranteed to drift over time. The unpredictable dynamics of underwater environments, including varying currents, lighting conditions and obstacles, further complicate autonomous navigation. As such, data collection while moving through a preplanned course is the traditional mission of the Autonomous Underwater Vehicle (AUV), defining the limitation of current technology. Higher-level missions such as search, surveillance, maintenance and manipulation require greater situational awareness, decision-making and navigation abilities, facilitated by processing semantic visual information and applying it to map generation and localization. To address the limited autonomy of current AUVs and enhance their capability for complex missions, this thesis presents the development and evaluation of a real-time, monocular visual-inertial semantic Simultaneous Localization and Mapping (SLAM) system for underwater environments, implemented on the cost-effective BlueROV2 platform. The research aims to enhance AUV autonomy and enable complex underwater missions through improved navigation and semantic mapping capabilities. 
Key contributions include the integration of a custom-trained object detector for underwater environments, adaptation of a hybrid SLAM algorithm combining Gaussian and Non-Gaussian landmarks for underwater operation, preliminary assessment of the SLAM system's accuracy using motion capture-based ground truth measurements, and comparative evaluation of the developed semantic SLAM system against state-of-the-art alternatives in an indoor pool experiment using the BlueROV2. This work addresses the challenges of underwater navigation and semantic mapping, offering a potential solution to extend the operational capabilities and mission complexity of affordable AUV platforms.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157234</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>CAD-Based Geometry Representations for Monte Carlo Fusion Neutronics Methods and CSG vs. DAGMC Performance Tradeoffs in OpenMC</title>
<link>https://hdl.handle.net/1721.1/157233</link>
<description>CAD-Based Geometry Representations for Monte Carlo Fusion Neutronics Methods and CSG vs. DAGMC Performance Tradeoffs in OpenMC
Du, Katelin
Fusion reactors utilizing deuterium and tritium fuel produce high-energy 14.1 MeV neutrons, necessitating a thorough understanding of their behavior for effective reactor design. Neutron transport codes play a critical role in determining key parameters such as tritium breeding ratio, neutron wall loading, and heat deposition, vital for assessing operational considerations. Monte Carlo (MC) radiation transport methods have become standard in fusion neutronics due to their ability to handle energy and angular variables continuously. However, manual modeling of complex fusion geometries with traditional constructive solid geometry (CSG) methods remains labor-intensive, prompting the integration of computer-aided design (CAD) models into MC radiation transport. This thesis investigates the integration of CAD-based geometry representations into MC radiation transport, focusing on computational performance implications of the Direct Accelerated Geometry Monte Carlo (DAGMC) approach. This work examines different neutronics model representations, including CSG, Unstructured Mesh (UM), and DAGMC for the practical solutions they can provide for fusion neutronics needs. Tracking algorithms associated with each representation are explored, highlighting UM and DAGMC’s versatility in the way they integrate with CAD-based design processes. Performance comparison between CSG and DAGMC geometries in OpenMC is analyzed by evaluating particle simulation rates and memory usage across four progressively complex fusion-like models. Performance results reflect positively on DAGMC transport, but areas of future work are identified for more comprehensive results. From the lens of computational performance, this study contributes to determining the viability of CAD-based geometry representations for use in fusion-relevant MC radiation transport.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157233</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Feasibility of Vector Instruction-Set Semantics Using Abstract Monads</title>
<link>https://hdl.handle.net/1721.1/157232</link>
<description>Feasibility of Vector Instruction-Set Semantics Using Abstract Monads
De Belen, Arthur Reiner
Formalizations of instruction-set semantics help establish formal proofs of correctness of both hardware designed to implement these instruction sets and the software implemented against this specification. One such prior work formalizes a specification of a subset of the RISC-V instruction-set architecture using a general-purpose language, Haskell, using its monad and typeclass support to abstract over effects. Another member of the same family is the RISC-V V extension, which specifies instructions for operating on multiple data elements in a single instruction, which is useful for domains with high levels of data parallelism, such as graphics rendering and machine learning. In this work I examine the question of whether the same prior work can be extended to formalize the semantics of the vector extension. I answer this question with a tentative “yes”, backed by a partial specification in Haskell of a small but nontrivial subset of this vector extension, a translation of the same specification into Coq using hs-to-coq, and work towards demonstrating the utility of this specification.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157232</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Labeling Schemes for Improving Cilksan Performance</title>
<link>https://hdl.handle.net/1721.1/157231</link>
<description>Labeling Schemes for Improving Cilksan Performance
Holla, Satya
While race detection algorithms like SP-bags have provably good theoretical properties, large overheads exist in practice, which urges the need for performance optimization. In this thesis, I propose labeling schemes as a method of circumventing many of the expensive operations in Cilksan, an implementation of the SP-bags algorithm. The proposed labeling schemes give strands of a parallel program labels during the execution of Cilksan, allowing Cilksan to shortcut the processing of certain memory accesses if the label comparison allows. I describe and prove correctness for two labeling schemes, the procedure labeling scheme and the prefix labeling scheme, implement both in Cilksan, and measure their performance. While the results show that the overhead of maintaining labels is too high in my implementation, the labeling schemes manage to circumvent many of the memory access operations, suggesting the merit of a more performant implementation of the same schemes.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157231</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty-Inclusive Contrastive Learning for Leveraging Synthetic Images</title>
<link>https://hdl.handle.net/1721.1/157230</link>
<description>Uncertainty-Inclusive Contrastive Learning for Leveraging Synthetic Images
Cai, Fiona X.
Recent advancements in text-to-image generation models have sparked a growing interest in using synthesized training data to improve few-shot learning performance. Prevailing approaches treat all generated data as uniformly important, neglecting the fact that the quality of generated images varies across different domains, datasets, and methods of generation. Using poor-quality images can hurt learning performance. In this work, we present Uncertainty-Inclusive Contrastive Learning (UniCon), a novel contrastive loss function that incorporates uncertainty weights for synthetic images during training. Extending the framework of supervised contrastive learning, we add a learned hyperparameter that weights the synthetic input images per class, adjusting the influence of synthetic images during the training process. We evaluate the effectiveness of UniCon-learned representations against traditional supervised contrastive learning, both with and without synthetic images. Across three different fine-grained classification datasets, we find that the learned representation space generated by the UniCon loss function leads to significantly improved downstream classification performance in comparison to supervised contrastive learning baselines.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157230</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sequence-Dependent &amp; -Independent Effects of Intron-Mediated Enhancement (IME)</title>
<link>https://hdl.handle.net/1721.1/157229</link>
<description>Sequence-Dependent &amp; -Independent Effects of Intron-Mediated Enhancement (IME)
Kowal, Emma J. K.
Introns are ubiquitous features of eukaryotic genes, and their precise removal from pre-mRNA transcripts by the spliceosome is an essential step in gene expression. Genomic deletion of an intron from a gene tends to reduce its expression, and addition of an intron tends to increase it. This phenomenon, termed Intron-Mediated Enhancement (IME), has been observed in many organisms, genes, and introns. IME can act at multiple levels to increase transcription rate, processing rate, export efficiency, translational efficiency, and stability of the processed mRNA. These stimulatory effects range across orders of magnitude depending on the context, and also on the identity of the intron, as has been shown in Arabidopsis thaliana. Presently, little is known about how intron sequence may determine the mode or magnitude of effect on gene expression output in animals. In this study we report the design and execution of several massively parallel reporter assays (MPRAs), interrogating the effect of tens of thousands of synthetic and natural intron sequences on gene expression in the human HEK293T and HeLa cell lines. We observe that even with random internal sequence, most of these introns splice well and trigger IME. In the primary tested context, the average intron stimulates an eight-fold increase in both mRNA and protein output over intronless controls, suggesting that the enhancement is largely at the level of mRNA accumulation. We analyze the sequence features associated with highly-enhancing introns and demonstrate that the poly-uridine (polyU) content of an intron is positively correlated with its impact on host gene mRNA and protein level. In a second library of natural intron sequences, we observe that U12-type introns do not stimulate IME, while U2-type introns universally do. Surprisingly, we observe in both MPRAs that the enhancement from random introns is similar to or greater than the enhancement from natural sequences.
In sum, we have developed a robust experimental platform for interrogating the sequence-activity relationship of IME, and used it to uncover new insights into this unsung sculptor of eukaryotic gene expression.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157229</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing Remote Sensing-Derived Normal Difference Vegetation Index to Predict Coastal Protection by Spartina alterniflora</title>
<link>https://hdl.handle.net/1721.1/157228</link>
<description>Analyzing Remote Sensing-Derived Normal Difference Vegetation Index to Predict Coastal Protection by Spartina alterniflora
Garber, Samantha C.
Coastal vegetation can provide protection to the coastline through its root structures, which reduce soil erosion, and its stem structures, which dissipate wave energy. The drag a plant induces could be used to quantify the amount of coastal protection that is provided. This study combined field measurements and drone surveys to develop methods for quantifying vegetation drag, focusing on Spartina alterniflora (S. alterniflora), a smooth cordgrass native to the study site: Waquoit Bay National Estuarine Research Reserve. The drag of a single plant is proportional to frontal area. The drag per bed area is proportional to the drag of a single plant and the number of stems per bed area. This study collected plant samples over the growing season to generate allometric relationships between tiller height and individual plant biomass and frontal area, which provides a way to translate remotely-sensed measures of biomass into stem count and frontal area per bed area. The frontal area was measured through digital imaging of individual plants. The elastic modulus of the stem was also measured using an Instron testing machine. For sixteen 1m x 1m test plots, Normalized Difference Vegetation Index (NDVI) extracted from drone multispectral imagery was compared to measured stem count and estimated biomass. The study compared two different years and three time points within a growing season [August 2022; June, August, October 2023]. In addition, at three plots the stem count was manually altered by cutting out 50% and 100% of the plants. This study found that while NDVI can be used to determine the abundance of S. alterniflora, there are several limitations that cause the correlations to be case-specific. Limitations to NDVI-S. alterniflora correlations included: (1) saturation, (2) species inhomogeneity of the area tested, (3) shoot density inhomogeneity of the area tested, and (4) environmental conditions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157228</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heat Pipes for the Thermal Management of High Frequency Transformers in the Navy integrated Power Electronics Building Block</title>
<link>https://hdl.handle.net/1721.1/157227</link>
<description>Heat Pipes for the Thermal Management of High Frequency Transformers in the Navy integrated Power Electronics Building Block
Hernandez, David
The development of the integrated Power Electronics Building Block (iPEBB) is key to the full electrification of future United States Navy ships. The creation of this modular, universal power converter takes full advantage of modern electronics; however, the high heat generation of these components, 9.6 kW from the MOSFET switches and 624 W from the transformer, makes thermal management crucial to their successful implementation. As a result of additional requirements, indirect liquid cooling using a detached cold plate is being studied; however, preliminary analysis revealed concerns regarding the hot spot temperatures of the transformer using this approach. This thesis explored the feasibility of using heat pipes to supplement the cooling provided by the cold plate to maintain iPEBB transformer core and coil temperatures below 100°C and 155°C respectively. First, experiments and analytical solutions were used to provide accurate estimates for the thermal conductivity values of the 3F36 ferrite and litz wire in the transformer. Then, a standalone thermal model of the transformer was built in StarCCM+ and used to test various cooling solutions, including forced airflow and heat pipe configurations. The proposed design utilized 16 copper-water heat pipes configured to provide alternative paths of heat flow for the regions of the transformer furthest from the cold plate. Shapal HiM Soft Machinable AlN ceramic was utilized to provide high voltage insulation, and electromagnetic simulations were used to estimate the induced losses in the heat pipes as a result of high frequency coil operations. Using a half-iPEBB thermal model, the final configuration, coupled with the cold plate cooled by 22°C deionized water at a flow rate of 0.37 kg/s, achieved a core maximum temperature of 99.7°C, coil maximum of 93.2°C, and MOSFET maximum of 144.6°C, all within their respective limits, while only adding a net weight of 0.29 kg to the iPEBB. 
The thermal results of this study showcase the effectiveness of heat pipes in the iPEBB and invite further analysis and experimentation to validate the electromagnetic implications of the concept. These results also contribute to the general ongoing study of heat pipe usage near high-frequency electronics.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157227</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Brownian dynamics simulation of soft matter with hydrodynamics: methods for constrained systems and shear processing of 2D materials</title>
<link>https://hdl.handle.net/1721.1/157226</link>
<description>Brownian dynamics simulation of soft matter with hydrodynamics: methods for constrained systems and shear processing of 2D materials
Funkenbusch, William Tian
2D materials are an emerging class of soft matter with a promising set of unique characteristics. The most ubiquitous 2D material, graphene, for example, possesses large surface areas, tunability, and unique electrical, optical, and catalytic properties while being lightweight, strong, and flexible. This has led to graphene seeing use in separations, biomedical applications, flexible electronics, and more. Meanwhile, synthetic 2D polymers, a relatively new field of study, represent a massive expansion of the design space for 2D materials and their applications. Solution processing of these materials is often an important step for synthesizing or applying them, necessitating knowledge of their behavior in suspensions and in flows. As these materials become more viable, our fundamental understanding of them must increase in tandem. This understanding will inform the design of these materials for desired applications. However, especially when compared to their 1D counterparts, our understanding of 2D materials is lacking. It is the goal of this thesis to help fill in this gap in knowledge.&#13;
&#13;
In Chapter 1, we discuss the basics of soft matter and the methods for simulating it, which form the basis for understanding the work in this thesis. We present and discuss the governing equations for the movement of soft matter particles. We then discuss the simulation methodology and mobility tensor approximations used in this thesis along with some additional considerations.&#13;
&#13;
In Chapter 2, we study methods for simulating constrained Brownian systems. We compare the current state-of-the-art method for these simulations, GMRES, to a different method called the projected conjugate gradient (PrCG) method, considering rigid bodies, freely jointed chains, and immobile systems. We find that both methods exhibit the same linear computational complexity; PrCG, however, exhibits some notable advantages over GMRES, including lower precomputational and storage burdens, a guaranteed feasible iterate, and trivial extension to new constraint types due to the lack of a preconditioner. We use PrCG to solve a mixed constraint problem with rigid body and immobile particles, comparing to the analytical solution at large separations.&#13;
&#13;
The remainder of this thesis studies the effects of self-attraction on self-avoiding, semi-flexible, athermal 2D materials (sheets) in shear flow. In Chapter 3, we give background on rheology and 2D materials necessary for understanding the remaining chapters. We begin by discussing non-Newtonian fluids, specifically their applications and effect on the momentum balance presented in Chapter 1. Then, we give a brief introduction to simple shear and discuss how it is implemented in simulations. Finally, we give a brief introduction to 2D materials, their applications, and previous experimental, theoretical, and computational work.&#13;
&#13;
In Chapter 4, we model self-interacting, self-avoiding, semi-flexible, athermal sheets in shear flow. We find a rich conformational landscape of four different behaviors --- flat, tumbling, 1D folded, and 2D folded --- which are well-delineated by several dimensionless groups representing the ratios between shear strength and interaction strength, and bending rigidity and interaction strength. We use these dimensionless groups to explain the observed behaviors, explain the folding behavior of 1D folded sheets, and calculate and explain the viscosity of a dilute suspension of these sheets. We use the conformational and rotational properties of the sheet simulations to explain this behavior, demonstrating a new explanation for the non-monotonic rheological properties of 2D materials which does not involve sheet-sheet interactions (which are rare in dilute suspensions) or thermal energy (which is often small in sheet systems). We also study systems with two initially stacked sheets in order to model, for example, shear exfoliation of 2D materials. We find three behaviors --- separating, waltzing, and flipping --- which are characterized by the same dimensionless groups as single sheets. We again explain these behaviors and calculate the viscosity of these sheets, which again shows interesting non-monotonic rheological properties that we explain using the conformational and rotational properties of the sheets.&#13;
&#13;
In Chapter 5, we use simple time-dependent flow protocols to show how the properties of sheets can be controlled. Specifically, we use linear shear annealing simulations to show that the final conformational properties of a sheet suspension can be tuned continuously by varying the quench time. We also use our knowledge of the phase map of sheets to design flow protocols with step changes in shear rate to produce a target state of highly aligned, 1D folded sheets which represents, among other things, our predicted lowest possible viscosity for a sheet suspension.&#13;
&#13;
In Chapter 6, we discuss potential future directions for the sheet model applied in Chapters 4 and 5. Specifically, we discuss loose ends from Chapter 4 and potential extensions of the model. We discuss potential benefits of and complications in exploring these directions.&#13;
&#13;
Finally, in Chapter 7, we summarize the discoveries presented in this thesis and provide concluding remarks.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157226</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Lie Theory in the Verlinde Category</title>
<link>https://hdl.handle.net/1721.1/157225</link>
<description>On Lie Theory in the Verlinde Category
Kannan, Arun S.
A symmetric tensor category arises by axiomatizing the basic properties of a representation category of a finite group. A famous theorem of Deligne states that, in characteristic 0, any symmetric tensor category of moderate growth is essentially a representation category of an affine supergroup scheme. This is not true in positive characteristic, with the most fundamental counterexamples being the Verlinde category Verₚ and its higher analogs Verₚn. It seems these categories will play a role in generalizing Deligne's theorem, and therefore, to understand symmetric tensor categories of moderate growth in general, it is important to study affine group schemes in these categories. The first part of the thesis reviews this theory.&#13;
&#13;
In the remainder of the thesis, we approach the study of Verₚ by considering two perspectives: the first perspective is that because these categories do not fiber over the category of supervector spaces, these categories provide examples of new phenomena which do not arise out of (super)algebra or (super)geometry. In particular, we explain how the Verlinde category can be used to provide new constructions of Lie superalgebras, and in particular exceptional simple Lie superalgebras in low characteristic. We also show that in characteristic 5 a new algebraic structure we call a "weak Jordan algebra" arises. Finally, we classify bilinear forms in the Verlinde category Ver₄⁺ and discuss the associated Witt semi-ring, which is a new algebraic structure.&#13;
&#13;
The second perspective is that these categories actually contain the category of supervector spaces, so they must generalize what is already known. We extend the theory of Frobenius kernels to the Verlinde category and use it to prove an analog of the Steinberg tensor product theorem for the group scheme GL(X).
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157225</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting irregular parallelism to accelerate FPGA routing</title>
<link>https://hdl.handle.net/1721.1/157224</link>
<description>Exploiting irregular parallelism to accelerate FPGA routing
Zhu, Alan Y.
In the era of hardware specialization, field-programmable gate arrays (FPGAs) provide a promising platform for computer architects, combining the programmability of software with the speed and performance of hardware. Despite this, compiling hardware programs onto FPGAs can be incredibly time-consuming, making it hard to develop and iterate on complex FPGA programs. Of particular relevance is the routing phase, which takes a circuit’s technology-mapped netlist and routes its signals using the switches and wires present on a given FPGA architecture, often with a target of minimizing critical path delay. This optimization problem is known to be NP-hard, and existing algorithms for approximating it exhibit very little regular parallelism.&#13;
This thesis accelerates the routing phase of VTR 8.0, a commonly used, open-source research tool for FPGA CAD flow. We show that despite the lack of regular parallelism, routing still exhibits significant irregular parallelism. This parallelism can be exploited on parallel architectures that provide hardware support for ordered tasks and fine-grained speculation, such as the Swarm architecture. Using Swarm, we exploit the parallelism present at the core of VTR’s algorithm, achieving a 35.9x speedup on a single routing iteration of a large benchmark (cholesky_mc) on 256 cores.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157224</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Calculation of Zakat on Financial Assets for American Muslims: A Financial and Jurisprudential Approach</title>
<link>https://hdl.handle.net/1721.1/157223</link>
<description>Calculation of Zakat on Financial Assets for American Muslims: A Financial and Jurisprudential Approach
Arsalan, Naveed
This thesis presents a comprehensive framework for calculating Zakat on modern financial assets specifically tailored for American Muslims. As one of the five pillars of Islam, Zakat is an obligatory form of charity for those who meet specific wealth criteria. However, applying traditional Zakat principles to contemporary financial instruments poses significant challenges, particularly within the context of the U.S. financial system.&#13;
&#13;
The research addresses these complexities by developing methodologies that consider diverse financial instruments, valuation challenges, tax implications, accessibility issues, and Shariah compliance. The framework covers a wide range of assets, including cash and bank accounts, stocks, mutual funds, bonds, cryptocurrencies, retirement accounts (401(k)s, Traditional and Roth IRAs), Health Savings Accounts (HSAs), employee stock options, precious metals and jewelry, and real estate investments.&#13;
&#13;
Bridging classical Islamic jurisprudence with modern financial realities, this thesis provides detailed calculation methodologies for each asset class, incorporating U.S.-specific considerations such as tax-deferred accounts and capital gains implications. The framework is designed to be adaptable to evolving financial markets and balances various scholarly opinions on contentious issues. To enhance accessibility, both comprehensive and simplified calculation methods are offered, catering to users with different levels of financial literacy.&#13;
&#13;
In conclusion, this thesis makes a significant contribution to Islamic finance by offering a structured, principle-based approach to Zakat calculation that is both Shariah-compliant and applicable in the modern American financial context. It provides a valuable resource for American Muslims striving to fulfill their religious obligations amidst the complexities of the U.S. financial system and lays the groundwork for future research in Islamic finance in Western contexts.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157223</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty Quantification in Deep Learning Models of G-Computation for Outcome Prediction under Dynamic Treatment Regimes</title>
<link>https://hdl.handle.net/1721.1/157222</link>
<description>Uncertainty Quantification in Deep Learning Models of G-Computation for Outcome Prediction under Dynamic Treatment Regimes
Deng, Leon
G-Net is a neural network framework that implements g-computation, a causal inference method for making counterfactual predictions and estimating treatment effects under dynamic and time-varying treatment regimes. Two G-Net models have been successfully implemented: one that uses recurrent neural networks (RNNs) as its predictors, and one that uses transformer encoders (G-Transformer). However, one limitation of G-Net is that its counterfactual predictive density estimates do not take into account uncertainty about model parameter estimates. These uncertainty estimates are necessary for establishing confidence intervals around the effect estimation, enabling a robust assessment of whether the effects of two treatment options exhibit statistically significant differences. An important area of work is adding support for quantification of model uncertainty for conditional effect estimation. This thesis aims to add uncertainty quantification to both the RNN-based G-Net and the G-Transformer. To achieve this, we use two well-known techniques in uncertainty modeling, namely variational dropout and deep ensembling. We evaluate our methods using two simulated datasets based on mechanistic models. We demonstrate that G-Net and G-Transformer models with uncertainty quantification are better-calibrated and perform better for individual-level clinical decision making than their baseline counterparts.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157222</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>HYPERION: A HYdrogen PERmeatION Experiment to Quantify Hydrogen Transport in Fusion-Relevant Molten Salts</title>
<link>https://hdl.handle.net/1721.1/157221</link>
<description>HYPERION: A HYdrogen PERmeatION Experiment to Quantify Hydrogen Transport in Fusion-Relevant Molten Salts
Cota, Jaron F.
The measurement of hydrogen transport properties of molten salts like FLiBe is crucial for the development of advanced nuclear technologies like lithium-bearing liquid immersion breeding blankets for fusion reactors. Tritium production and the quantification of its mobility in these materials is necessary for efficient operation of these technologies. A common method of measuring these properties is with hydrogen permeation experiments, which involve measuring the flux of hydrogen permeating through a substance; from this flux, transport properties like the diffusivity and solubility of hydrogen in the molten salt can be derived with various models of the experimental setup. This thesis describes the process of fabricating and assembling a HYdrogen PERmeatION (HYPERION) experiment and provides preliminary results on its functionality, along with issues encountered and their troubleshooting. The experiment was also modeled using the code Finite Element Simulation of Tritium In Materials (FESTIM). The models were used to explore the design parameter space of the experiment to determine its effectiveness in accurately calculating the hydrogen transport properties of the molten salt. The modeling process called into question assumptions normally made when performing these experiments and quantified their validity, suggesting that previously conducted experiments might have been significantly affected by these assumptions. Using these models could eventually improve the accuracy of measured transport properties for FLiBe and other fusion-relevant molten salts and inform the design of hydrogen permeation experiments moving forward.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157221</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Language Interface for Prescriptive AI Solutions in Enterprise</title>
<link>https://hdl.handle.net/1721.1/157220</link>
<description>Natural Language Interface for Prescriptive AI Solutions in Enterprise
Orderique, Piero
Despite advancements in causal inference and prescriptive AI, enterprise adoption remains limited, primarily because of complexity and a lack of interpretability. This work at the MIT-IBM Watson AI Lab extends the proof-of-concept agent PrecAIse by designing a domain-adaptable conversational agent equipped with a suite of causal and prescriptive tools. The objective is to make advanced, novel causal inference and prescriptive tools widely accessible through natural language interactions. The presented Natural Language User Interface (NLUI) enables users with limited expertise in machine learning and data science to harness prescriptive analytics in their decision-making processes without requiring intensive compute. We present an agent capable of function calling, maintaining faithful, interactive, and dynamic conversations, and supporting new domains.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157220</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geo-UNet: A Geometrically Constrained Neural Framework for Clinical-Grade Lumen Segmentation in Intravascular Ultrasound</title>
<link>https://hdl.handle.net/1721.1/157219</link>
<description>Geo-UNet: A Geometrically Constrained Neural Framework for Clinical-Grade Lumen Segmentation in Intravascular Ultrasound
Chen, Yiming
Precisely estimating lumen boundaries in intravascular ultrasound (IVUS) is needed for sizing interventional stents to treat deep vein thrombosis (DVT). Unfortunately, current segmentation networks like the UNet lack the precision required for clinical adoption in IVUS workflows. This arises due to the difficulty of automatically learning accurate lumen contour from limited training data while accounting for the radial geometry of IVUS imaging. We propose the Geo-UNet framework to address these issues via a design informed by the geometry of the lumen contour segmentation task, building anatomical constraints directly into the architecture. We first convert the input data and segmentation targets from Cartesian to polar coordinates. Starting from a convUNet feature extractor, we propose a two-task setup, one for conventional pixel-wise labeling and the other for single boundary lumen-contour localization. We directly combine the two predictions by passing the predicted lumen contour through a new activation (named CDFeLU) to filter out spurious pixel-wise predictions. Our unified loss function carefully balances area-based, distance-based, and contour-based penalties to provide near clinical-grade generalization in unseen patient data. We also introduce a lightweight, inference-time technique to enhance segmentation smoothness. The efficacy of our framework on a venous IVUS dataset is shown against state-of-the-art models. We will make the code repository for this project available soon after approval from industry collaborators.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157219</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of Machine Learning-Based Methods for Narrowband Blind Adaptive Beamforming</title>
<link>https://hdl.handle.net/1721.1/157218</link>
<description>Comparison of Machine Learning-Based Methods for Narrowband Blind Adaptive Beamforming
Shonkwiler, Lara
There are many different approaches to beamforming and interferer cancellation. The earliest methods of beamforming assumed prior knowledge of the receive array geometry and of the incoming signal directions. This information is normally found via array calibration. Blind source separation methods do not require this information and therefore are more robust to array calibration errors. Traditional blind source separation methods generally leverage some intrinsic characteristic of the signal, such as constant envelope properties or second- or higher-order statistics. Methods such as CMA, SOBI, JADE, and FastICA tend to be highly effective at beamforming datasets with moderate to large sample supports, but they do not perform well when they only have access to a limited number of data samples. They also bear the disadvantage that the appropriate algorithm must be selected based on the properties of the expected signal. Machine learning-based methods are of interest because they show promise in low sample support regimes, and because they offer the possibility of a ‘one size fits all’ solution that can adaptively recognize and exploit different signal features. This thesis describes the performance of two machine learning-informed beamforming methods — Classification-Based Transfer Learning (CBTL) [1] and Denoising-Based Transfer Learning (DBTL). CBTL and DBTL are evaluated with respect to each other and with respect to traditional blind beamforming methods across a variety of signal detection environments, and are found to offer superior or equivalent performance in a majority of environments.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157218</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Speech Motor Pattern in Minimally Verbal Adults with Autism Spectrum Disorder via Surface Electromyography</title>
<link>https://hdl.handle.net/1721.1/157217</link>
<description>Characterizing Speech Motor Pattern in Minimally Verbal Adults with Autism Spectrum Disorder via Surface Electromyography
Protyasha, Nishat Fahmida
Minimally verbal adults with Autism Spectrum Disorder (mvASD) experience significant speech production challenges linked to impaired motor skills. Despite the prevalence of these speech difficulties, the underlying motor mechanisms remain poorly understood. This thesis investigates the neuromuscular activity associated with speech motor movement in mvASD using surface electromyography (sEMG). By capturing and analyzing sEMG signals with 8 electrodes from key facial muscles during speech production tasks, this study provides insights into the distinct motor patterns exhibited by mvASD individuals compared to neurotypical controls. The sEMG data was collected while 25 participants, including 10 mvASD individuals and 15 neurotypical controls, performed a series of carefully designed speech tasks. Features such as Root Mean Square (RMS) values, Pearson correlation coefficients, and eigenvalues from auto- and cross-correlation matrices were extracted to measure muscle activation and coordination complexity. The results reveal that mvASD individuals exhibit higher RMS values and greater synchronization between sEMG channels, indicating stronger muscle activation and tighter coupling among facial muscles. Furthermore, the analysis of eigenvalues suggests lower complexity in motor coordination among mvASD participants, reflecting fewer degrees of freedom in muscle control. These findings were supported by classification models, which demonstrated that features from diadochokinetic tasks were more effective in distinguishing mvASD from neurotypical individuals.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157217</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biometric and Biomechanical Sensing for Violin Performance Analysis</title>
<link>https://hdl.handle.net/1721.1/157216</link>
<description>Biometric and Biomechanical Sensing for Violin Performance Analysis
Kydd, Aria
Expressive violin performance demands the coordination of multiple physical and physiological processes. Students, especially those engaged in infrequent private lessons, often struggle to manage these demands. Outside of lessons, they lack access to the resources and external feedback that technology has made readily available in other learning settings. In this study, we propose the Expressive Violin Performance Sensing (EVPS) system as a solution to this issue. The EVPS system uses low-cost and accessible electronic sensors to provide objective, quantitative insights into the physical and physiological aspects of a violinist’s performance. Results from experimental trials reveal that the EVPS system provides relatively reliable data on expressive violin performance. While the general measures of physicality did not reveal significant differences between players of distinct skill levels, physiological and specific physical measurements aligned well with predictions. The successful utilization of low-cost sensors in the EVPS system highlights their potential for use in future performance analysis studies, challenging the precedent of relying on expensive, medical-grade systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157216</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regulation of microRNA degradation in Caenorhabditis elegans via the E3 ubiquitin ligase EBAX-1</title>
<link>https://hdl.handle.net/1721.1/157215</link>
<description>Regulation of microRNA degradation in Caenorhabditis elegans via the E3 ubiquitin ligase EBAX-1
Stubna, Michael William
microRNAs (miRNAs) are short, ~22-nucleotide noncoding RNAs that base-pair to messenger RNAs (mRNAs) to direct their post-transcriptional repression through their associated Argonaute (AGO) proteins. Animal genomes encode hundreds of miRNAs that, together, regulate a majority of mRNAs and tune spatiotemporal gene expression programs. The production and degradation of many miRNAs occurs in a regulated manner, but molecular pathways of miRNA degradation are relatively poorly understood.&#13;
Some rapidly degraded miRNAs owe their instability to a mechanism termed target-directed miRNA degradation (TDMD), whereby unusual miRNA binding sites with extensive complementarity to the miRNA promote a conformational shift in AGO, leading to the recruitment of an E3 ubiquitin ligase complex containing the substrate receptor ZSWIM8. The subsequent polyubiquitination and proteolysis of AGO liberates the miRNA, rendering it vulnerable to nucleases. TDMD underlies the instability of many miRNAs in diverse cell lines and animals.&#13;
In this work, I probe the biological scope of TDMD as a regulatory mechanism in the nematode Caenorhabditis elegans, which tolerates homozygous loss of the ZSWIM8 ortholog, EBAX-1, and expresses some miRNAs that are subject to rapid, developmentally regulated decay. I have confidently identified at least 22 miRNAs destabilized by EBAX-1 across the worm life cycle. These included the embryonic miR-35–42 family as well as certain stress-responsive miRNAs that together constitute some of the shortest-lived miRNAs in this organism. In mutants of ebax-1, the accumulated miR-35–42 family miRNAs excessively repressed predicted target mRNAs and underwent 3′ trimming as they aged, though no consistent signature of 3′ trimming or tailing emerged for EBAX-1-sensitive miRNAs.&#13;
A recent study reports that the destabilization of miR-35 at the embryo-to-L1 transition does not depend on that miRNA’s 3′ region, unlike canonical mammalian TDMD. To test the generality of this result for other EBAX-1 sensitive miRNAs, I assayed the behavior of seed- or 3′-based miR-43 variants in the presence and absence of EBAX-1. Intriguingly, the miR-43 3′ variants showed substantially reduced propensity to be regulated by EBAX-1. The requirement for 3′ pairing therefore varies between EBAX-1 sensitive miRNAs, raising questions about the molecular features of TDMD trigger RNAs that recruit EBAX-1 when extensive pairing is not crucial.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157215</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensitivity Analysis of Self-Loosening Behavior for Mesoscale Bolt Assemblies Under Cyclic Lateral Loading</title>
<link>https://hdl.handle.net/1721.1/157214</link>
<description>Sensitivity Analysis of Self-Loosening Behavior for Mesoscale Bolt Assemblies Under Cyclic Lateral Loading
Martinez, Alejandro
This study aims to enhance the understanding of self-loosening in mesoscale bolt assemblies, specifically those with characteristic dimensions ranging from 100 to 3,000 micrometers. These bolts pose unique design challenges due to the small difference between their nominal dimensions and manufacturing tolerances. This work discusses the design of new instrumentation to test multiple mesoscale bolt assemblies under various loading conditions, an area previously focused only on larger bolts. A case study was conducted in collaboration with a mesoscale multi-bolt system that was experiencing self-loosening failures. This system was tested to determine its susceptibility to the self-loosening failure mode. An experimental study was conducted to identify the sensitivities of the system to geometric and loading environment parameters. A set of hypotheses was proposed to facilitate new understanding of the system’s sensitivities to four different parameters. The findings from the experimental study provide valuable insights into how different geometric configurations and types of loading conditions contribute to the performance of mesoscale multi-bolted systems. Through these investigative efforts, the study successfully identified the existence of a critical displacement threshold for self-loosening in mesoscale multi-bolted systems that is sensitive to factors such as clamp length, amplitude of input displacement load, and screw position.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157214</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Development of an Accelerated Material Synthesis Platform for Automated Materials Research</title>
<link>https://hdl.handle.net/1721.1/157213</link>
<description>Design and Development of an Accelerated Material Synthesis Platform for Automated Materials Research
Aissi, Eunice I.
Materials development is the foundation for innovation in many industries and fields; however, this process is traditionally slow and resource-intensive. Most often, new materials are developed and characterized on the time scale of years, which can limit the pace of scientific and industry innovation. I address the material synthesis and characterization bottleneck by presenting a framework that I believe is suitable for smaller labs: self-built, low-cost automation. The design philosophy is to de-risk the lab automation process by keeping costs low, failing fast, and leveraging common resources in electronic systems and additive manufacturing. I present an improved version of a low-cost but high-throughput inkjet material printer developed by Siemenn et al. and adapted to operation in the glovebox, hood, and benchtop environments. The tool is capable of depositing gradients of droplets with unique compositions at a rate of up to 1000 materials per minute, is self-built, and costs around $500. I also present a computer-vision-enabled high-throughput material characterization algorithm for stability quantification through color degradation. The synthesis and characterization methods are validated on a methylammonium lead iodide (MAPbI3) and formamidinium lead iodide (FAPbI3) perovskite material system. X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and hyperspectral imaging measurements show equivalence between high-throughput synthesis and more traditional spin-coating methods. Results obtained through the high-throughput stability characterization method are aligned with stability trends reported in the literature and have an accuracy of 96.9% when compared to ground-truth degradation as measured by a domain expert.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157213</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>FrED Manufacturing - A Study in Affordable Manufacturing to Scale using Desktop Sized Fiber Extrusion Device</title>
<link>https://hdl.handle.net/1721.1/157212</link>
<description>FrED Manufacturing - A Study in Affordable Manufacturing to Scale using Desktop Sized Fiber Extrusion Device
Rosko, Rachael S.
FrED (Fiber Extrusion Device) Factory is a manufacturing facility at MIT which educates its students on fundamental and advanced manufacturing principles. The factory produces multiple FrED devices, which are "desktop fiber extrusion systems that mimic continuous fiber draw process for hands-on learning and/or laboratory experience on data acquisition, control system, and smart manufacturing. It allows learners to perform experiments, vary manufacturing parameters and control system, collect data, and perform analysis." [1] This year’s thesis work builds on the progress from 2023, which aimed to produce a low-cost variant of earlier versions of the FrED. In 2024, the aim for the lab was to implement design refinements, design for manufacturing, design the assembly line, design packaging, develop supply chain using Tulip, develop educational content, perform user testing, and execute pilot runs. The focus of this thesis will be on design refinements related to the graphical user interface (GUI), inclusion of threading to improve program speed, and characterization of performance related to diameter control, as well as advancements in educational content development, user testing, production-level assembly, and pilot runs. The results of this thesis include significant improvements made to the FrED device such as a user-controlled GUI as well as closed-loop controls. Furthermore, key components of the device were quantified, such as the frame rate of the USB camera and motor stability, which aided in understanding how diameter control and modulation can be implemented in future work. At the time of submission, there were inherent complications still not understood about the FrED that limited its potential as an end-user product.
Some complications included reliability of the diameter reading from the USB camera, physics of the hot glue preform, and motor speed assumptions which did not perform well under closed-loop testing (spool speed going to 0 in order to make the diameter larger consequently prevents the camera from reading any future diameter measurements, which is problematic). In terms of pilot runs, user testing, and educational content development, the results were promising. 78.3% of the 23 user testing respondents at Venture Cafe said they were interested in receiving a FrED and getting access to more learning content. Suggestions were made by the users for future work and implementation. Educational content was developed for mass flow and data acquisition; however, a formal pilot-run session where this could be tested for feedback was not performed.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157212</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carbon Capture Efficiency in Natural Gas Combined Cycle Power Plants: Analyzing the Effects of Variable Load Operations</title>
<link>https://hdl.handle.net/1721.1/157211</link>
<description>Carbon Capture Efficiency in Natural Gas Combined Cycle Power Plants: Analyzing the Effects of Variable Load Operations
Knight, Caleb M.
Natural gas power generation retrofitted with carbon capture technology is poised to play a crucial role in ensuring energy reliability amidst the transition to variable renewable energy resources. While natural gas generation is used primarily for baseload power, it is expected to transition toward intermittent operation, serving as a load-following resource during periods of low renewable energy availability. It will be critical to understand how start-up, shutdown, and load-following behavior may impact system performance and influence future grid design. &#13;
&#13;
This thesis performs a comprehensive literature review to establish context on various techniques of carbon capture technology. Post-combustion carbon capture, specifically absorption-based technology, remains the preferred candidate for retrofitting natural gas plants due to its technical maturity, scalability, relatively high capture efficiencies, and ease of retrofitting. The literature highlights that absorption-based carbon capture units exhibit degraded performance during non-steady-state operating conditions. Specifically, cold start-ups result in lower capture efficiencies and higher heat rates, although hot start-ups incur significantly less performance reduction. &#13;
&#13;
The literature review findings are integrated into GenX, a grid optimization tool, to evaluate natural gas combined cycle power plants equipped with carbon capture technology. The modified optimization models are run using the ISO New England grid system, and results suggest that incorporating advanced start-up penalties for natural gas plants reduces operational flexibility in an emissions-constrained environment. As capture efficiencies decrease and heat rates increase during start-ups, utilizing natural gas plants becomes more expensive due to the additional emissions and reduced thermal efficiency. Comparing models with different levels of performance degradation during start-up suggests that installing less gas capacity could be optimal, with those units operating at higher capacity factors to mitigate start-up penalties. Under modest emissions constraints, natural gas units may be operated continuously even during periods of renewable energy surplus. Harsher start-up penalties applied to natural gas plants likely increase the incremental value of alternative energy technologies, although natural gas retains a critical role in the energy mix.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157211</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cheaper Than A Funeral: Considering Ibogaine’s Psychedelic Journey and Therapeutic Potential</title>
<link>https://hdl.handle.net/1721.1/157210</link>
<description>Cheaper Than A Funeral: Considering Ibogaine’s Psychedelic Journey and Therapeutic Potential
Daly, Noah
The past decade has seen a surge of interest in psychedelic compounds as therapeutic medicine. Ibogaine, an indole alkaloid extracted exclusively from an endangered family of shrubs native to the Central African nations of Gabon and Cameroon, is a psychedelic currently being studied for its unique therapeutic potential. It is also considered the most extreme of the psychedelic drugs currently known to researchers. For the past fifty years, it has been used to treat severe substance use disorders, particularly those involving highly addictive opioids and stimulants. In the past ten years, American special operations forces veterans have begun to take ibogaine to treat traumatic brain injuries (TBI). Anecdotal evidence has suggested that the permanent, downstream symptoms TBI patients experience after these injuries are effectively managed after a single ibogaine treatment. Advocacy from the special operations veterans community prompted Stanford University researchers to embark on the first-ever U.S.-based clinical trial of ibogaine to treat TBI. The study, published in January 2024, added to decades of evidence of ibogaine’s clinical potential. Yet questions remain about whether ibogaine’s cardiac toxicity can be effectively managed in human patients, as well as about the true therapeutic utility of the prolonged period of dreamlike consciousness ibogaine produces in patients. This thesis examines the cases of three patients, all United States military veterans, undergoing ibogaine therapy, and considers how the biological impacts of ibogaine, as well as their psychedelic experiences, may have saved their lives.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157210</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The rickettsial effector Sca4 has a conserved interaction with host clathrin and a tick cell specific role in infection</title>
<link>https://hdl.handle.net/1721.1/157209</link>
<description>The rickettsial effector Sca4 has a conserved interaction with host clathrin and a tick cell specific role in infection
Vondrak, Cassandra Joan
Intracellular bacterial pathogens secrete effectors to manipulate the host cell environment, create a hospitable niche, and promote infection. While many effectors interact with specific host machinery to perform a single distinct function, some effectors are capable of interacting with multiple host proteins to carry out multiple functions. Rickettsia species are obligate intracellular bacteria that cause vector-borne diseases that constitute an ongoing public health threat. As Rickettsia spp. have small genomes, and thus a limited coding capacity, multifunctional effectors may be an efficient way to manipulate their host environment. However, relatively few secreted effectors have been characterized in the Rickettsia genus and even fewer have been identified as multifunctional effectors. &#13;
&#13;
In this work, I demonstrate that the rickettsial secreted effector Sca4 interacts with the host endocytic factor clathrin heavy chain. As previous work showed that Sca4 interacts with the host protein vinculin in mammalian cells, this discovery of the Sca4-clathrin interaction makes Sca4 one of the first multifunctional effectors to be identified in a Rickettsia species. When investigating the potential role of the Sca4-clathrin interaction, I found that clathrin promotes the cell-to-cell spread of R. parkeri in mammalian cells by acting in the recipient cell. However, the Sca4-clathrin interaction was found to be dispensable for efficient cell-to-cell spread. I investigated the role of this interaction in the tick arthropod vector and found that the Sca4-clathrin interaction is necessary for the efficient proliferation of R. parkeri in tick cells. These findings show that knowledge of the complete roles of rickettsial secreted effectors in both arthropod vector and mammalian hosts is needed to fully understand rickettsial pathogenesis.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157209</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Systems Thinking Approach to Hispanic Engineer’s Involvement in Corporate Diversity Networks</title>
<link>https://hdl.handle.net/1721.1/157208</link>
<description>A Systems Thinking Approach to Hispanic Engineer’s Involvement in Corporate Diversity Networks
Chambe, Enoch
Affinity networks, also known as Employee Resource Groups (ERGs), are increasingly essential in today’s corporate world as they play a crucial role in fostering diversity, equity, and inclusion within organizations. These groups provide a platform for employees from underrepresented or marginalized communities to connect, share experiences, and find support. ERGs geared towards Hispanic employees are often advertised not only as a means to connect with others and gain a sense of belonging but also as avenues towards successful professional development and growth for underrepresented employees. This research explores the perspectives of a group of experienced engineers from various technical backgrounds and industries to understand whether there is a correlation between generational status for Hispanic Americans and their overall perceived benefits from participating in ERGs. The study provides a detailed literature review of relevant existing research on this subject, followed by semi-structured interviews of ten participants, and a thematic analysis approach used to organize the data into the following five themes: diversity considerations for school and job selections, employee perspectives on ERGs, sense of belonging and generational differences, the meaning of inclusiveness, and continued participation. Finally, a research conclusion and a series of recommendations are provided.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157208</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Women Nobel Laureates in STEM (2000-2023): Life Stories, Challenges, and How They Achieved Impact for Success</title>
<link>https://hdl.handle.net/1721.1/157207</link>
<description>Women Nobel Laureates in STEM (2000-2023): Life Stories, Challenges, and How They Achieved Impact for Success
Wu, Kedi
Science, Technology, Engineering, and Math (STEM) are the critical growth engines that develop the economy and society and improve our lives overall. However, women are underrepresented in STEM, which means 50% of the world's brain power is untapped. We know that, in general, women face barriers and challenges, such as gender bias and stereotypes, that men do not. However, we know less about the unique obstacles and challenges women face in STEM, and even less about how to overcome those barriers. This research aims to identify the challenges faced by women in STEM and to gain a practical understanding of what women can do to evolve as leaders. As STEM is extremely broad, this thesis focused on studying the 11 female Nobel laureates who won the prize after 2000 under the three STEM-related Nobel categories: physics, chemistry, and medicine or physiology.&#13;
&#13;
First, a comprehensive literature review was conducted to understand the study results of existing barriers faced by women in STEM and the enablers that can increase the likelihood of women's success in STEM. Next, data were collected about the 11 women STEM Nobel laureates, including their biographies, life stories, newspaper reports, and interview transcripts. Thematic analysis was then used to analyze the collected data, in which four themes were identified and presented: 1) Overcome Barriers and Challenges; 2) Qualities of a Good Scientist; 3) Supportive Systems; 4) Impactful, Humanity, Innovative. Finally, the findings are summarized in relation to the research objectives to provide insights for women who want to pursue a STEM career.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157207</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms and Implementation of Thermo-Optical Annealing in Silica Fiber Sensors for Radiation-Induced Attenuation Mitigation</title>
<link>https://hdl.handle.net/1721.1/157206</link>
<description>Mechanisms and Implementation of Thermo-Optical Annealing in Silica Fiber Sensors for Radiation-Induced Attenuation Mitigation
Legoupil, Aurelien Y. M.
In the context of quench detection systems for fusion superconducting magnets, temperature sensors based on optical fibers provide an effective solution for rapid, distributed measurement, with low sensitivity to electromagnetic interference. At the cryogenic temperatures and high radiation doses associated with this application, however, optical fibers undergo radiation-induced attenuation (RIA): light-absorbing point defects form within the silica glass structure, reducing the longevity and effectiveness of these sensors. In this work, we investigate the underlying microscopic defects and mechanisms of RIA and assess strategies for mitigation, namely, annealing via heat treatment (thermal annealing) and annealing via light propagation through the fiber (optical annealing, or “photobleaching”). We design a white light absorption spectroscopy setup with in-situ irradiation and optical annealing, working at liquid nitrogen temperature and different post-irradiation warm-up rates. For the pure silica core and F-doped cladding fibers studied, the RIA spectrum obtained is decomposed into known radiation-induced defect absorption bands, highlighting the key role of self-trapped holes in RIA at telecommunication wavelengths. Furthermore, absorption spectroscopy experiments are performed to show that thermal annealing at liquid nitrogen temperature is negligible, validating the transferability of the experimental results obtained at 77 K to 20 K applications. The decomposition of RIA into different defect contributions is supported by cold post-irradiation electron paramagnetic resonance (EPR) spectroscopy of fiber preform fragments, which reveals the presence of two types of paramagnetic centers: self-trapped holes and E'_gamma centers. The post-irradiation transient grating spectroscopy (TGS) technique is adapted to glass samples with continuous cooling at liquid nitrogen temperature and in-situ optical annealing. 
With this technique, we could observe the changes in thermal and acoustic properties resulting from the evolution of defect populations, with the potential to complement other experimental techniques to better understand RIA build-up and annealing kinetics. To improve the modeling of thermo-optical annealing, we propose future experiments including isothermal annealing tests and a larger exploration of optical annealing parameters. Our RIA build-up and annealing tests can help companies aiming to operate optical fibers under irradiation at cryogenic temperatures optimize their heat treatments to restore fiber transmission and to prevent RIA during operation.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157206</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Energy and Area Estimation Plugin for Accelerator Architecture Simulation</title>
<link>https://hdl.handle.net/1721.1/157205</link>
<description>An Energy and Area Estimation Plugin for Accelerator Architecture Simulation
Wu, Wendy
Development of domain-specific hardware accelerators has been an important focus for high performance computing research in recent years, enabling significant gains in a variety of practical applications. Of particular interest is accelerator design for applications involving sparse data. Such accelerators inherently tend towards a diverse array of architecture designs, and often rely on custom simulators for evaluation. In addition to raw performance, energy consumption and chip area are both important considerations for evaluating accelerators. Accelergy is a tool that provides a good general framework for fine-grained energy and area estimation. However, output from simulation tools may not be compatible with Accelergy’s expected input format, which is the case for the custom simulator Accelsim. To address this gap, this work presents a streamlined plugin for processing Accelsim simulator output into Accelergy input, for the purpose of generating accurate and explainable energy consumption and area models for accelerator architectures. We demonstrate the plugin’s flexibility by performing energy and area estimates for two state-of-the-art hardware accelerators, ISOSceles and Trapezoid. Overall, this plugin is easy to use and self-contained, and it supports a wide variety of configurable functionalities, making it an excellent general tool for running Accelergy on Accelsim output.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157205</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recovery of Herschel-Bulkley Fluid Parameters from Video via Differentiable Simulations</title>
<link>https://hdl.handle.net/1721.1/157204</link>
<description>Recovery of Herschel-Bulkley Fluid Parameters from Video via Differentiable Simulations
Eastman, John M.
Recreating the physical behavior of fluids from real-world footage remains a significant challenge, particularly for non-Newtonian fluids. This work introduces a novel method that combines neural radiance fields (NeRF), which map 3D scene coordinates to color and density using deep neural networks, with the material point method (MPM), a simulation technique that represents materials as moving points capable of large deformation. Our approach aims to accurately recover physical parameters and achieve high-fidelity 3D reconstructions from single-view videos of fluids, even those with complex rheological behaviors like shear thinning and thickening. In this study, we apply our method to a Herschel-Bulkley fluid, namely ketchup, under two different real-world conditions: a 50mm column collapse and being squeezed from a bottle. By leveraging the differentiable nature of NeRF and the fluid simulation capabilities of MPM, our approach extracts parameters from real-world footage after initially training on approximate geometry derived from virtual models. The actual video footage is then used to estimate initial velocities and retrieve constitutive parameters, including modulus, yield stress, and viscosity. The iterative optimization process, which integrates continuous feedback between the NeRF-MPM simulation and the video data, enables us to extract constitutive parameters from real footage and perform predictive simulations that closely reflect the behavior observed in the training videos. Key results include the retrieval of constitutive parameters, such as modulus, yield stress, and viscosity, as well as reconstructed videos that reflect the fluid behavior observed in the training video. The results demonstrate that our method can reconstruct the fluid’s flow behavior from limited perspectives, accurately enough to visually reproduce the flow, showcasing its flexibility and robustness. 
This work not only validates the approach through a series of experiments but also highlights the potential for differentiable rendering and simulation techniques to advance our understanding and simulation of complex material dynamics, particularly in cases where direct measurements are challenging or impossible.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157204</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Adaptive Parsing to Integrate Dialogue Scripts in Game Development</title>
<link>https://hdl.handle.net/1721.1/157203</link>
<description>Using Adaptive Parsing to Integrate Dialogue Scripts in Game Development
Taylor, Temi
For people without programming experience, integrating their work into the main project forms a common bottleneck in video game development. Particularly for dialogue writing, existing approaches for moving the text into the codebase are either highly tedious or excessively heavyweight for faster-paced projects. Given that writers often initially produce loosely formatted scripts, this thesis describes Game-DAP, an adaptive parsing system that accounts for the variation in individual dialogue writing styles. Examinations of pre-existing systems and a survey of developers form a basis for a syntactic model of the information commonly encapsulated by dialogue scripts. This model lends itself to a design for the parsing process used by Game-DAP, which aims to provide as much flexibility to writers as possible with those assumptions as a baseline. User testing results informed the evaluation of the system, focusing on its accuracy, flexibility, and accessibility from the perspective of various authors. Although this analysis revealed several classes of inputs that Game-DAP struggles to process with full correctness, the more successful cases and instances of positive feedback suggest that a refined approach to this kind of domain-specific parsing could provide great value in the creative writing process of game dialogue.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157203</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Optoelectronic Properties of Twisted and Intercalated Niobium Oxide Dihalides</title>
<link>https://hdl.handle.net/1721.1/157202</link>
<description>Exploring Optoelectronic Properties of Twisted and Intercalated Niobium Oxide Dihalides
Luo, Ashley
2D materials, or layers of one-atom-thick crystalline solids, offer a flexible solution for a variety of applications that require certain characteristics. As a result of modifications in physical and chemical design involving 2D materials, such as stacking, twisting, and ion intercalation, properties such as electrical conductivity, spin diffusion length, thermal conductivity, and mechanical strength gain more degrees of freedom than in their bulk material counterparts. Currently, small optical systems comprise passive devices that are rigid in their light-pathing design and require modulators to control light post-fabrication for use. These systems are confined by the material used to fabricate the device and its associated effective indices, which are determined pre-fabrication by the ultimate desired optical effect. However, 2D materials can exhibit tunable band structures that yield the optimal optical response, even post-fabrication. This thesis will discuss the properties of mechanically and chemically manipulated niobium oxydichloride (NbOCl₂) and niobium oxydiiodide (NbOI₂) ultrathin structures that have the potential to integrate into flexible optical systems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157202</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Elastic Resistive Force Theory &amp; Applications to Uprooting</title>
<link>https://hdl.handle.net/1721.1/157201</link>
<description>Development of Elastic Resistive Force Theory &amp; Applications to Uprooting
Yilmaz, Lale
Granular intrusion processes such as sand locomotion, uprooting, and digging are widespread. While these phenomena can be accurately modeled via discrete element methods and continuum models, this accuracy comes at a great computational cost, especially for large systems. Granular Resistive Force Theory (RFT) is a reduced-order, rate-independent model that has been shown to successfully capture the motion of rigid intruders in granular media at a reduced computational cost. RFT calculates the force experienced by a body using its direction of velocity, which makes it difficult to handle near-stagnant scenarios, such as those that occur frequently in the uprooting of plants. To overcome this limitation, we introduce elastic RFT (eRFT), which is based on a rate-independent plasticity flow-rule–like criterion, and pair it with deformable intruders. We focus on modeling uprooting processes, which inherently involve flexible intruders and are often dynamically controlled. This allows us to address both previously mentioned shortcomings of RFT (stagnancy and flexible intruders) at once. By combining eRFT with a nonlinear beam theory to represent slender, inextensible roots, we create a fast computational tool. Using MATLAB, we simulate various uprooting scenarios to better understand the anchoring mechanisms of different root geometries. We showcase the validity of eRFT results by comparing them to experimental data. To implement eRFT in ABAQUS, we make use of an existing user subroutine, which allows the study of a broader range of intruder materials and shapes. While the subroutine has its limitations, initial comparisons to computational and experimental results are promising.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157201</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of the US Capitol Attack on political views in Argentina, Brazil, and Chile</title>
<link>https://hdl.handle.net/1721.1/157200</link>
<description>Effects of the US Capitol Attack on political views in Argentina, Brazil, and Chile
Garcia III, George Reuben
Is it possible for major political events, such as the U.S. Capitol insurrection on Jan. 6, 2021, to influence political attitudes in other countries? Such events may act as framing devices that influence individuals to think somewhat differently about democracy and populism, primarily by reminding them of domestic shortcomings. Some previous literature has found international attitude effects from major events like terrorism or environmental disasters. In this study, I take advantage of the fact that the insurrection took place in the middle of a set of surveys administered to bureaucrats in Argentina, Brazil, and Chile. The events of Jan. 6 thus act as a type of exogenous shock, allowing for an interrupted time series analysis. I find that satisfaction with democracy generally declined across all three countries, but only in Chile did support for democracy and elections fall and populist attitudes rise.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157200</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring the angular and spectral reflectance characteristics of color-dynamic films by modifying their photonic texture and topcoat roughness</title>
<link>https://hdl.handle.net/1721.1/157199</link>
<description>Tailoring the angular and spectral reflectance characteristics of color-dynamic films by modifying their photonic texture and topcoat roughness
Blair, Andrew D.
Controlling nano- and microscale morphology is essential for tailoring the appearance of structurally colored stretchy films. An effective approach for controlling the optical properties of such color-dynamic photonic films, which are manufactured holographically, is demonstrated using two simple control handles: the texture of the photonic structure and the surface roughness of a transmissive topcoat. The texture of the photonic structure affects the spectral signature and angular distribution of reflected light. The surface roughness of the topcoat affects the angular distribution of incident and reflected light. Fourier optics concepts are harnessed for modeling and predicting the optical characteristics of the materials as a function of their photonic texture and topcoat roughness. The model is verified with data obtained by imaging the angular scattering distribution and spectroscopic analysis of four representative combinations of photonic texture and topcoat roughness. The findings presented in this thesis validate the hypothesis that controlling the texture of the photonic film and the roughness of its topcoat allows for tailoring the visual appearance of structurally colored materials. This approach provides access to a rich design space of different appearances, including strong iridescence, color constancy with collimated light sources at small angles of incidence, pure and muted colors, and specular and highly diffuse reflections.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157199</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanics of Three-Dimensional Micro-Architected Interpenetrating Phase Composites</title>
<link>https://hdl.handle.net/1721.1/157198</link>
<description>Mechanics of Three-Dimensional Micro-Architected Interpenetrating Phase Composites
Chen, Andrew Y.
The design of modern composite materials, as used in a wide range of engineering applications, is largely derived from a traditional framework based on laminates. While resulting in desirable strength and stiffness properties, the laminate-based structure leads to a high degree of anisotropy and unique failure modalities like interlaminar failure, limiting the performance of these composites under complex loading conditions. Meanwhile, recent work in the field of architected materials has yielded a thorough understanding of geometry-dependent material behavior, enabling the development of highly robust architectures with tunable (an)isotropy. However, such advances have focused primarily on describing the response of lightweight architected geometries composed mostly of air. The effect of adding a load-bearing matrix is not well understood. Here we investigate the effect of geometry and constituent material properties on the mechanics of 3D-architected interpenetrating phase composite (IPC) materials, i.e., two-phase materials consisting of an architected structure surrounded by a matrix. Using computational homogenization, we first predict how resultant coupled stress states in the composite change with the material properties of each individual phase and contextualize the results within traditional stiffness scaling laws. We then demonstrate two robust fabrication pathways for realizing polymer- and carbon-based centimeter-scale architected IPCs with micro-scale features. Using these prototypes, we study the mechanical behavior of the fabricated composites under uniaxial compression, with particular emphasis on the non-linear and failure regimes. We show that independent of the material system, the presence of a load-bearing matrix distributes the stress in the composite, contributing to a high-strength, globally stretching-dominated failure behavior, regardless of nodal connectivity.
Moreover, the development of a 3D, highly tortuous pathway for stress delays or prevents catastrophic failure of the traditionally brittle architecture phase, resulting in energy dissipation performance of the composite that exceeds the sum of its individual constituents. Finally, we demonstrate that the composite stress state can be architected using geometric design of the IPC and introduce an example of tunable mechanical response in an architected composite inspired by traditional auxetic metamaterials. Altogether, this work broadens our established understanding of the link between architecture and mechanical performance by considering the framework of interpenetrating phase composites, creating the foundation for a new class of strong, resilient, and programmable materials with architected stress states.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157198</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>State and Dynamics Estimation in an Outdoor Multi-Drone Slung Load System</title>
<link>https://hdl.handle.net/1721.1/157197</link>
<description>State and Dynamics Estimation in an Outdoor Multi-Drone Slung Load System
Merton, Harvey
Over the past decade, aerial drones have been used to address problems in areas such as sensing and measurement, inspection, delivery, security, and defense. Adding a load attached to one or more drones using a flexible cable can significantly enhance the capabilities of these platforms. This work aims to develop a multi-drone platform, built on open-source tools such as PX4 and ROS2, that can be used to lift a general slung load in an outdoor environment. Simulators of varying fidelity, including a pseudo-photo-realistic Gazebo simulator, are developed alongside a functional real-world platform for testing load pose estimation methods. A novel cable-based testing apparatus that enables drone translation is used to facilitate stability testing of a quasi-static formation control method for lifting a slung load. This work aims to be the first to use visual feedback to estimate a load’s pose in a multi-drone slung load system operating without external motion capture devices. In simulation, perspective-n-point-based visual estimation achieves position errors of 0.1 m and geodesic distance attitude errors near 0°. Real-world testing shows errors of 0.2 m and 5° respectively. Applying extended Kalman filter and unscented Kalman filter formulations, simulated position estimates average around an error of 0 m, while the error noise magnitude is only 6% of the cable length at 0.06 m. Achieving accurate load pose estimates without an inertial measurement unit mounted to the load requires a good cable dynamics model. This work concludes by presenting a novel model for the effect of cables in a drone-slung-load system. A method based on universal differential equations shows promising early results.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157197</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Medical Devices to Improve Oral Delivery of Biopharmaceuticals</title>
<link>https://hdl.handle.net/1721.1/157196</link>
<description>Engineering Medical Devices to Improve Oral Delivery of Biopharmaceuticals
Sharma, Shonit Nair
The dynamic mechanics of the gastrointestinal (GI) tract, including gut contractions, variable pH, and degradative enzymes, significantly challenge the development of oral delivery systems for biologic drugs by compromising their reliable delivery and therapeutic efficacy. While recent advances in oral delivery systems offer improved absorption through tissue penetration, their clinical translation remains tenuous due to the uncertainty of actuation-based delivery in variable environments and inherent design complexity. Inspired by the compression-based toxin delivery system of the stonefish, we developed a simple oral delivery device that harnesses GI mechanics to reliably actuate and systemically deliver biologic drugs. By synchronizing device actuation with gut contractions, the device and tissue work in tandem to ensure the complete transfer of a loaded therapeutic from the device to the tissue, bypassing physical and biochemical barriers and maximizing absorption. Through ex vivo and in silico experiments, we engineered the geometry of the device to achieve safe and targeted injections in the gut. An ex vivo electromechanical simulation model revealed the effectiveness of gut contractions for device actuation, and extensive in vivo experiments involving minipigs demonstrated comparable biologic drug delivery efficacy to subcutaneous injection. Harnessing the dynamic mechanics of the GI tract to improve oral delivery could transform drug administration and significantly enhance the lives of many patients.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157196</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Generation of Chemical Kinetic Models for Biofuel Oxidation and Pyrolysis</title>
<link>https://hdl.handle.net/1721.1/157195</link>
<description>Automatic Generation of Chemical Kinetic Models for Biofuel Oxidation and Pyrolysis
Dong, Xiaorui
Biofuels hold great promise for reducing greenhouse gas emissions and boosting engine performance. Modeling biofuel combustion and pyrolysis chemistry with chemical kinetic models allows for high-throughput evaluation of their performance under various conditions. These models, typically comprising hundreds of species and thousands of elementary reactions, provide quantitative predictions for the investigated systems. While manually creating such mechanisms is labor-intensive and requires extensive knowledge, reaction mechanism generation software, such as the Reaction Mechanism Generator (RMG), greatly facilitates model development by automating the selection of relevant species and reactions, as well as estimating their thermochemical, kinetic, and transport parameters. Despite the advances in these software packages and their success in modeling small, simple conventional fuels, their application to novel, under-explored biofuels is often limited due to a lack of knowledge in the relevant chemical space. This gap can be bridged by expanding the software's access to accurate thermochemical and kinetic parameters. However, these data are scarce, and their acquisition, typically via quantum chemical calculations, is challenging on a large scale. &#13;
&#13;
This thesis addresses these challenges by developing automated workflows to enhance the calculation of accurate thermochemical and kinetic parameters, thereby extending the capabilities of RMG for biofuel modeling. First, an automatic thermochemistry calculation workflow is implemented and integrated into the chemical kinetic model development process. The significant improvement in computational capacity enables an iterative approach to model generation and refinement, where the thermochemistry of hundreds of molecules is refined in each iteration. This approach is validated through the modeling of light alkene combustion chemistry, resulting in a model that accurately predicts key combustion properties and outperforms other well-regarded models. This study highlights the necessity of sufficient refinement iterations for a comprehensive exploration of the relevant chemical space and the convergence of critical species and reactions in the chemical kinetic model. This approach is then applied to model less-studied biofuels, such as butyl acetate isomers and tetramethylethylene. By incorporating key kinetic parameters from literature and quantum chemical calculations, along with iteratively refined thermochemistry, the developed models demonstrate strong predictive capabilities. These models agree with experiments conducted after their development and reveal important reaction pathways in the studied systems.&#13;
&#13;
Additionally, this thesis enhances the acquisition of accurate kinetic parameters through the simultaneous development of software, datasets, and data-driven models. RDMC, a cheminformatics software, is developed, featuring toolkits for elementary reaction analysis and end-to-end automated workflows for generating molecular and transition state conformers. A dataset covering nine reaction types relevant to combustion and pyrolysis radical chemistry is created using the RDMC workflow. Concurrently, another radical reaction dataset is curated, covering different reaction types. In total, the two datasets introduce high-quality elementary reaction data for over 11,000 radical reactions. Eventually, a graph neural network is trained on the new dataset for fast kinetic parameter estimation that can potentially benefit chemical kinetic model development.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157195</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Hurdles to Highways: Overcoming Barriers to Robotics Adoption in Supply Chains</title>
<link>https://hdl.handle.net/1721.1/157194</link>
<description>From Hurdles to Highways: Overcoming Barriers to Robotics Adoption in Supply Chains
Hegarty, Bartholemew
Macroeconomic events are putting unprecedented pressure on the warehouse industry. Among these are labor shortages, increased operating costs, and the desire for greater customization and higher throughput from these facilities. Focused on these challenges and strategic issues for warehouse applications, this thesis investigates the obstacles to implementing robotic automation in supply chains. The thesis explores this environment through the lens of three common integration methods: the traditional purchase, the lease, and the emerging robotics-as-a-service (RaaS) model. With these methods in scope, the study incorporates a multicriteria decision-making (MCDM) framework built on an analytic hierarchy process (AHP) combined with the technique for order of preference by similarity to the ideal solution (TOPSIS). From this framework, the research identifies key decision criteria and their impact on selecting the most suitable integration strategy for automation.&#13;
&#13;
Through a literature review, the study identified the essential criteria for the project design decision: infrastructure requirements, system capabilities, usability, provider reputation, project duration, and total cost of ownership. We then gained insight from industry professionals familiar with automation integration through a focused field study, highlighting practical issues and general opinions on the criteria and how well they correspond to integration plans. The results highlight notable trade-offs in the decision criteria, emphasizing the need for a more tailored strategy to make automation adoption more efficient.&#13;
&#13;
This thesis provides an effective decision support system to guide the choice of appropriate automation solutions. It helps clarify how decision makers give the most importance to different criteria when implementing robotic automation. The research findings offer helpful details for practitioners navigating the challenging warehouse automation environment. This, therefore, encourages better informed and more efficient decision-making procedures.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157194</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Bayesian Inference of Reaction Networks via Guiding</title>
<link>https://hdl.handle.net/1721.1/157193</link>
<description>Automatic Bayesian Inference of Reaction Networks via Guiding
Arya, Gaurav
Jump process models based on chemical reaction networks are ubiquitous, especially in systems biology modeling. However, performing inference on the latent variables and parameters of such models is challenging, particularly when the observations of the system state are noisy and incomplete. This thesis presents CatalystFitting, a system for inferring the latent variables and parameters of stochastic reaction network models given observational data. CatalystFitting provides primitives for performing changes of measure on jump processes. Building on top of these primitives, CatalystFitting further provides a library of strategies for guiding a jump process to match an observation set. These strategies exploit the form of the underlying symbolic reaction network to automatically produce guides optimized to the particular reaction network structure of interest to the modeler, accelerating otherwise costly Bayesian inference procedures. We present inference results on a bistable switch system and a repressilator system.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157193</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>GPU-Oriented Algorithms for Continuous Energy Monte&#13;
Carlo Neutron Transport</title>
<link>https://hdl.handle.net/1721.1/157192</link>
<description>GPU-Oriented Algorithms for Continuous Energy Monte&#13;
Carlo Neutron Transport
Ridley, Gavin
The advent of graphics processing units (GPUs) has brought computing to new heights with deep learning models, now deployed ubiquitously and touching the lives of many. While GPU hardware may be ideal for deep learning, its full potential in various scientific computing applications has yet to be realized. Often, paradigm shifts in the data formalisms and algorithmic choices used to solve scientific computing problems must take place to fully leverage GPUs. A quintessential example of this shift has been the move towards matrix-free, high-order finite element formulations researched under the Exascale Computing Project. Similar groundbreaking shifts are only starting to take place in continuous energy Monte Carlo (MC) neutron transport simulations. These simulations play a crucial role in designing fission, fusion, and security systems that may play a pivotal role in the transition to a decarbonized world. This work contributes to adapting continuous energy MC neutron transport simulations for the GPU computing era. We first summarize some changes made to other scientific computing applications that led to performance gains on GPUs, which informed our independent development of a CUDA-based version of OpenMC, an open-source continuous energy MC neutron and photon transport code. Fortunately, the historical event-based MC simulation modality developed extensively through the 1980s for vector computers provides an excellent basis for GPU computing. Adapting a full-physics, continuous energy MC neutron transport simulation for GPUs is a feat only completed by a few institutions across the world, so we share some software development tricks that facilitated this task. We then identify a variety of algorithmic optimizations that improved the performance of the baseline CUDA application, and identify areas for further development. 
Based on experience adapting a full-physics continuous energy MC code for GPU, we identify two pieces of the simulation that can be improved for GPU computing: resonance upscatter handling and unresolved resonance modeling. Our new method for modeling resonance upscatter is based on a novel, fundamental observation regarding the resonance upscatter effect. The relative speed tabulation (RST) method developed by other GPU MC researchers can be underpinned by a universal special function we have named the incomplete Faddeeva function, which is closely related to the incomplete Goodwin-Staton integral. Our research develops numerical algorithms for efficient, accurate computation of the incomplete Faddeeva function and identifies some properties of the function. We then present a specialized root-finding algorithm that takes advantage of the structure of the problem to efficiently sample the resonance upscatter effect on GPUs. This obviates the need to rely on RST tables or a zero kelvin pointwise cross section, freeing precious GPU memory while using a GPU-friendly memory access pattern. Continuing in the same direction, we focus on unresolved resonance region (URR) cross-section modeling, which was shown to induce a 30% computational efficiency degradation on GPUs. We review the requirements to model cross sections in the unresolved resonance regime, and provide what is to our knowledge the first rigorous demonstration that URR modeling can be reduced to a one-dimensional probabilistic model in addition to some expectation values of partial cross sections conditioned on the total. Through three asymptotic arguments covering different resonant behavior regimes, we show that the normal inverse Gaussian distribution is the natural choice for modeling the total neutron cross-section distribution.
Rather than inducing a performance degradation, we show the new URR modeling technique in fact outperforms a pointwise infinite-dilute approach when it is used to model the URR region.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157192</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Learning Multimodal Extraction of Reaction Data</title>
<link>https://hdl.handle.net/1721.1/157191</link>
<description>Deep Learning Multimodal Extraction of Reaction Data
Wang, Alex
Automated extraction of structured information from chemistry literature is vital for maintaining up-to-date databases for use in data-driven chemistry. However, comprehensive extractions require reasoning across multiple modalities and the flexibility to generalize across different styles of articles. Our work on OpenChemIE presents a multimodal system that reasons across text, tables, and figures to parse reaction data. In particular, our system is able to infer structures in substrate scope diagrams as well as align reactions with their metadata defined elsewhere. In addition, we explore the chemistry information extraction potential of Vision Language Models (VLM), which allow powerful large language models to leverage visual understanding. Our findings indicate that VLMs still require additional work in order to meet the performance of our bespoke models.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157191</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building a Scalable Electrification Infrastructure in&#13;
Logistics</title>
<link>https://hdl.handle.net/1721.1/157190</link>
<description>Building a Scalable Electrification Infrastructure in&#13;
Logistics
Alam, Muhammad Ashhad
The transportation sector in the US contributes to about a third of all greenhouse gas emissions, about a quarter of which stems from road freight. A major driver of this environmental footprint remains a heavy reliance on trucking—the least fuel-efficient mode of transportation. A key pathway toward freight decarbonization, therefore, involves shifting from internal combustion engines (ICE) to electric powertrains in truck fleets. This work develops analytics-based solutions to support and assess the electrification of long-haul logistics operations, by applying the methods to PepsiCo’s operations in Texas.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157190</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verifying Correctness of the Number Theoretic Transform and Fast Number Theoretic Transform in F⋆</title>
<link>https://hdl.handle.net/1721.1/157189</link>
<description>Verifying Correctness of the Number Theoretic Transform and Fast Number Theoretic Transform in F⋆
Ono, Rick R.
As engineers continue to develop more sophisticated optimizations of cryptographic algorithms, the often simple mathematical specifications become convoluted in the implementations, giving rise to a class of correctness bugs. Because cryptographic algorithms often secure sensitive information, their correctness, and in turn their security, is a top priority. The Number Theoretic Transform (NTT) is an algorithm that enables efficient polynomial multiplication and has recently gained importance in post-quantum cryptography. This thesis presents a proof of correctness of the NTT in F⋆, a proof-oriented programming language that extracts to OCaml, and shows that we can use the NTT to perform polynomial multiplications. We provide an implementation of the Cooley-Tukey fast NTT algorithm and a proof that it matches the original NTT specification. This thesis also presents a representation of polynomials in the F⋆ subset Low*, which extracts to performant C code.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157189</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous UAV Navigation using Millimeter Wave&#13;
Radar</title>
<link>https://hdl.handle.net/1721.1/157188</link>
<description>Autonomous UAV Navigation using Millimeter Wave&#13;
Radar
Herrera, Joshua I.
We present the design, implementation, and evaluation of MilliNavigator, an autonomous navigation system for drones capable of mapping, path-planning, self-localizing, and navigating in indoor environments by leveraging strategically placed millimeter wave anchors. Autonomous drones are an increasingly relevant tool for completing and automating hard-to-reach tasks. State-of-the-art navigation systems rely primarily on cameras and GPS for environmental perception and self-localization. These solutions can impose restrictions on existing systems, limiting their navigable environments to well-lit, outdoor, unobstructed paths. This thesis presents MilliNavigator, the first system to use millimeter wave radar and anchor-aware path planning to achieve high-accuracy, 6DOF, online localization. By generating a localization precision score map from known anchor deployments, the system jointly optimizes travel distance and localization performance. We implemented and evaluated MilliNavigator on a drone built with commercial, off-the-shelf parts. We ran over 165 successful missions across 7 different tag deployments. Our system achieved an overall median error of 7.9 cm and a 90th percentile error of less than 21 cm.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157188</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Exocompilation for Performance Engineers in&#13;
User-Schedulable Languages</title>
<link>https://hdl.handle.net/1721.1/157187</link>
<description>Practical Exocompilation for Performance Engineers in&#13;
User-Schedulable Languages
Qian, Kevin
High performance computing libraries provide efficient implementations of common computational kernels. Traditionally, such libraries are written in C or assembly. User-schedulable languages give performance engineers a productive way to optimize these kernels through well-designed interfaces that give users control over performance-relevant decisions and automate unnecessary concerns. Often, this is a tradeoff: too much control with too little automation is tedious to program, and too much automation with too little control hinders obtaining peak performance. The principle of exocompilation advocates one end of this spectrum: giving performance engineers maximal control over code execution so they can maximize performance. However, its current implementation in existing systems is impractical to use. This thesis broadly explores ways to make exocompilation a practical solution for performance engineers. We show that providing more control does not necessitate sacrificing automation, as long as the language is designed so that users can build their own automation. We explore the design features necessary to enable such a system, demonstrate the types of automation users can build in the system, and brainstorm ways to further push the amount of control user-schedulable languages expose to the user.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157187</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>satdatagen: a Python Library for Satellite Sensor Task&#13;
Scheduler Support</title>
<link>https://hdl.handle.net/1721.1/157185</link>
<description>satdatagen: a Python Library for Satellite Sensor Task&#13;
Scheduler Support
Golden, Adina H.
The number of objects in Earth’s orbit is increasing rapidly, raising urgency for intensified observations of satellites and other resident space objects (RSOs) to manage space traffic and prevent collisions. Current methods for RSO detection and tracking rely on ground-based and space-based observatories with optical or radar sensors, but these telescopes require complex scheduling to achieve surveillance of all objects. Previous works have implemented scheduling algorithms and machine learning models that optimize the assignment of tasks to the sensors for RSO observations. However, prior methodologies rely on different datasets, making it hard to compare across methods. This paper presents satdatagen: a software package that generates datasets intended as baseline inputs to satellite sensor task schedulers. The datasets contain information about every satellite that passes in view of the sensor, such as its altitude angle and brightness. Additionally, actual cloud cover data is included for optical telescopes that must take visibility into account while scheduling observations. satdatagen is simple to use and does not require extensive outside knowledge from developers of scheduling tools.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157185</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Motion Phantom Development for MRI</title>
<link>https://hdl.handle.net/1721.1/157184</link>
<description>Motion Phantom Development for MRI
Liu, Kerlina
The development of magnetic resonance imaging (MRI) has enabled health care professionals to non-invasively visualize subjects' soft tissue for medical diagnosis. Since its conception, artifacts due to patients' movement have been an issue, and an assortment of tools and methods have been developed to help mitigate the effect of motion on MRI; however, such mitigation methods are generally applicable only on a case-by-case basis, depending on the specific type of motion. As such, additional research is required to develop novel mitigation methods and a standardized way of testing, validating, and ultimately comparing mitigation strategies.

This work provides a design for a motion stage, as well as build instructions for the Martinos head phantom, which moves in four degrees of freedom (linear translation in the plane parallel to the floor, a head-shaking "no" motion, and a head-nodding "yes" motion) independently of one another, with limited success. Only the translation into and out of the bore (along the z-axis) worked as expected, while the translation perpendicular to it (x-axis) did not. The total range of motion the head phantom achieved in the head-shaking "no" motion was approximately 19 degrees, though the required torque is on the higher end (on the order of 0.06 N*m) and the position of the rotational actuator needs some reexamination. The head-nodding "yes" mechanism is more promising, allowing a tilt of 1 degree downwards and 2 degrees upwards, but requires actuators capable of exerting 6 N of force or more.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157184</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Streamoscope: A Low-Cost, Open-Source, USB-3-Capable Streaming Data Acquisition System for Low-Field MRI</title>
<link>https://hdl.handle.net/1721.1/157183</link>
<description>Streamoscope: A Low-Cost, Open-Source, USB-3-Capable Streaming Data Acquisition System for Low-Field MRI
Feld, Joseph W.
Magnetic Resonance Imaging (MRI) is a powerful, safe imaging technique based on using magnetism to provide contrast between soft tissues. Portable, low-field MRI is a growing area that has already demonstrated value in both educational and clinical domains. Low-field MRI systems need to acquire data with sample rates in the tens of megahertz, which can make the data acquisition system the bulk of the overall cost of low-cost systems. This work presents the Streamoscope: an open-source data acquisition system designed for low-field MRI that streams two 14-bit resolution channels at 60 megasamples per second over USB-3 into Python. It is approximately $300 in parts, about a quarter of the price of the cheapest data acquisition system on the market that would work in our case study. The Streamoscope can stream full-sample-rate raw MRI data into a computer to be processed in Python, enabling real time imaging. The system has been validated by generating 2D images of a phantom on a system with an 8 MHz Larmor frequency.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157183</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging the Health Divide: Achieving Equitable Healthcare Access in Kenya through Artificial Intelligence</title>
<link>https://hdl.handle.net/1721.1/157182</link>
<description>Bridging the Health Divide: Achieving Equitable Healthcare Access in Kenya through Artificial Intelligence
Nyakiongora, Geoffrey Mosoti
This research explores the innovative application of Artificial Intelligence (AI), particularly Generative Pre-trained Transformer (GPT) models, in designing culturally sensitive hospitals for rural Kenya. The research addresses the critical need for improved healthcare infrastructure in underserved areas, focusing on the potential of AI to create efficient, adaptable, and contextually appropriate hospital designs. The study employs a mixed-methods approach, combining qualitative analysis of cultural practices and healthcare needs with quantitative data on environmental factors and health statistics. A GPT model is developed and fine-tuned on a comprehensive dataset of Kenyan cultural information, healthcare data, and architectural knowledge. This AI model is then used to generate hospital design concepts that are evaluated against newly developed cultural sensitivity metrics. Key findings demonstrate the potential of AI to significantly reduce design time, improve space utilization, and enhance cultural appropriateness in hospital designs. The thesis also highlights the importance of human-AI collaboration, with local experts and community representatives playing crucial roles in refining and implementing AI-generated concepts. Challenges identified include data quality and availability in rural settings, the need for ongoing model refinement, and the importance of establishing ethical guidelines for AI use in healthcare design. The thesis concludes with a set of recommendations for implementing AI-driven, culturally sensitive hospital design processes in rural Kenya, including the development of specialized AI models, and establishment of collaborative design methodologies. These findings have significant implications for improving healthcare infrastructure in resource-constrained settings and offer a model for culturally sensitive, AI-driven architectural design in developing contexts globally.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157182</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Shape of Kubler: George A. Kubler in Peru, 1948-49</title>
<link>https://hdl.handle.net/1721.1/157181</link>
<description>The Shape of Kubler: George A. Kubler in Peru, 1948-49
Schweig, Johann
Yale art history professor George Kubler’s seminal 1962 publication The Shape of Time is, according to his own words, representative of a “crossroads between the history and anthropology of art.” This work does not stand alone, but is rather part of a larger corpus of study through which Kubler recurred to disciplines, methods and tools outside of what is traditionally considered art historical—including anthropology, architectural representation, and biology—in order to generate new readings and understandings of the history of South and Central American art. This thesis takes a look into a year of Kubler’s life in 1948-49, spent in Peru conducting archival research and field work on culture change with the Institute for Social Anthropology at the Smithsonian Institution and teaching a seminar on the use of archival sources in ethnology at Universidad Nacional Mayor de San Marcos in Lima; during this time, Kubler also engaged in the construction of an archive of his own. Drawing from correspondence and other records from the period in question, a series of lost episodes resurface, providing a reconstruction of various strata of 1940s Peruvian society: an increasingly cosmopolitan Lima stands in stark contrast to the underdeveloped, feudal Andean world, evidencing its colonial underpinnings. I contend that witnessing the coexistence of various temporalities within a single geographic territory had a significant impact on Kubler’s later theories on spatialized historical time.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157181</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial Computing for Building Performance and Design</title>
<link>https://hdl.handle.net/1721.1/157180</link>
<description>Spatial Computing for Building Performance and Design
Weber, Ramon Elias
Accommodating urban population growth while reducing emissions from the built environment poses an unprecedented challenge to the architectural discipline. To enable more sustainable construction, the dissertation proposes a new computational design framework to investigate how building performance from an environmental and user perspective relates to spatial design. The dissertation surveys existing computational methodologies for design automation and identifies new opportunities and value propositions for architectural computing in design guidance, feedback, and optimization. Exploring methods to generate and optimize buildings' structural systems and interior layouts, the dissertation focuses specifically on residential buildings. By applying generative design methods to building analytics, new ways of estimating the embodied carbon of a building and the environmental impact of system-level design choices can be explored.
First, the research demonstrates how generative geometric algorithms can be coupled with structural simulations to accurately predict the structural material quantity and, through that, the embodied carbon of a building in early stages of design. Second, a new method for representing, analyzing, and generating spatial layouts – the hypergraph – is proposed that captures the characteristics of any given floor plan. Unveiling new architectural opportunities through automatic geometry creation, the hypergraph shows potential to improve the quality of residential spaces in terms of environmental performance and access to daylight. Enabling new design tools for architects, it offers creative applications and new collaborative workflows for incorporating new spatial metrics in the design process. Allowing for new quantitative insights into building performance, the research demonstrates that spatial efficiency can outperform envelope upgrades in terms of carbon emission savings.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157180</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beans to Bytes: Grey-Box Nonlinear System Identification Using Hybrid Physics-Neural Network Models</title>
<link>https://hdl.handle.net/1721.1/157179</link>
<description>Beans to Bytes: Grey-Box Nonlinear System Identification Using Hybrid Physics-Neural Network Models
Pronk, Morgen
The advancement of neural networks in the last several years has yielded some astonishing results. However, their applicability to system identification and the modelling of dynamical systems still leaves considerable room for exploration. This thesis reviews different neural network architectures and their application to complex non-linear dynamic system identification. In particular, it uses the intricate process of coffee roasting as a case study to explore and demonstrate these techniques. Coffee roasting is a complex process that requires precise control to achieve the desired coffee quality. The ability to develop models that represent a system, i.e. system identification, is of great value to industry. Coffee roasting poses several challenges for system identification, from complex chemical reactions occurring inside the bean to temperature trajectories that depend on several states that cannot be explicitly measured, such as moisture content or reaction rate, making it an ideal candidate for exploring the application and limitations of neural networks. The primary contributions of this study are a proposed "grey-box" model that augments previously established physics-based models, as well as an illustration of the limits of LSTM and Deep NARX models using "one-step" forward prediction techniques. Although the study focuses explicitly on coffee roasting, the conclusions drawn are applicable to other similarly complex industrial and manufacturing processes.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157179</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Agent Reinforcement Learning for Autonomous Robotics</title>
<link>https://hdl.handle.net/1721.1/157178</link>
<description>Multi-Agent Reinforcement Learning for Autonomous Robotics
Vincent, Caroline R.
Technological advancements in autonomous robotics, including autonomous vehicles, have created new opportunities for innovative solutions to many everyday challenges. The impact of integrating robotic agents into real-world applications may be significantly enhanced by leveraging advancements in multi-agent autonomous systems. However, the coordination required in multi-agent systems demands complex motion planning to deconflict actions and prevent collisions of vehicles moving at increasingly high speeds. This thesis explores the application of multi-agent reinforcement learning (MARL) to autonomous robotics by teaching a central controller to navigate multiple agents across various environments without collisions. The simulated scenarios range from simple, obstacle-free environments to complex environments with obstacles configured to form narrow passageways or represent other complexities in dense urban environments. The findings demonstrate the potential of MARL to achieve high accuracy in navigating these different environments, highlighting the method's flexibility and adaptability across diverse settings and the resulting implications for applying MARL to real-world scenarios.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157178</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Study on Deploying Large Language Models as Agents</title>
<link>https://hdl.handle.net/1721.1/157177</link>
<description>A Study on Deploying Large Language Models as Agents
Cao, Jiannan
This thesis investigates the deployment and utilization of Large Language Models (LLMs) as agents, exploring their potential in automating workflows and enhancing user interactions. The study begins with an in-depth analysis of language models, tracing their evolution from pure statistical models to advanced neural network architectures like Transformers and their bidirectional variants. It then delves into the operational framework of LLM agents, detailing user interactions, environmental considerations, memory management, task planning, and tool use. The study addresses critical limitations in LLM inputs, such as the context window, and introduces Retrieval-Augmented Generation (RAG) as a solution to extend the model’s capability. Key APIs provided by OpenAI for deploying GPT models are discussed, highlighting their functionalities and applications. Finally, the practical application of LLMs in creating Robotic Process Automation (RPA) workflows is demonstrated through a divide-and-conquer methodology, showcasing the efficiency, scalability, flexibility, and accuracy of this approach. This comprehensive study underscores the transformative impact of LLMs in automating complex processes and enhancing user experiences through intelligent agent deployment.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157177</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nature-Centered Materiomics: Experimental and Computational Design</title>
<link>https://hdl.handle.net/1721.1/157176</link>
<description>Nature-Centered Materiomics: Experimental and Computational Design
Shen, Sabrina C.
As of the year 2020, the accumulated mass of anthropogenic materials outweighs all living biomass on Earth. Industrial material production simultaneously contributes nearly 30% of global greenhouse gas emissions each year, which, in conjunction with solid waste accumulation and the deterioration of ecological processes, threatens the livelihood of current and future generations of both human and non-human species. This is in dramatic contrast with natural materials, which consistently outperform human engineering, yet are invariably produced using abundant, renewable sources of energy and, upon their disuse, decompose to fuel new growth. Nature effectively forms sustainable supply chains with no waste by leveraging both the constituents of materials and their structural organization at multiple scales, architecting common and abundant building blocks into a variety of high-performing composites. In this thesis, we present a nature-centered materiomics approach to emulate this in the design of novel sustainable materials. We leverage both computational and experimental strategies to consider multiple length-scales and time-scales across the processing, structure, properties, and performance of material systems with minimal ecological impact. First, we demonstrate machine learning strategies for harnessing functional geometries in natural materials and demonstrate how interpretable models can be leveraged toward novel material design. Next, we develop a platform for the fabrication of tunable biocomposites composed of renewable and biodegradable feedstocks, and consider Bayesian optimization as an approach to guide composite optimization and design. Finally, we extend the fabrication system to hybrid-living materials and demonstrate dynamic bio-welding capabilities in the strongest mycelium-based material in the literature to date.
Altogether, these contributions enhance multiscale understanding of nature-centered material design and pave the way for future innovations that align human engineering with regenerative material cycles.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157176</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cross-Shelf Exchange Driven by Dense Flow Down a Canyon</title>
<link>https://hdl.handle.net/1721.1/157175</link>
<description>Cross-Shelf Exchange Driven by Dense Flow Down a Canyon
Mier, Christian M.
Laboratory experiments investigated the dynamics controlling the cross-shelf exchange in a prograde sloping canyon induced by dense shelf water descending into the canyon. This thesis is motivated by the dispersal of dense water generated by polynyas on the Arctic and Antarctic continental shelves. Laboratory results corroborate prior numerical results suggesting that canyons are hotspots of cross-shelf exchange. When the dense water descends a canyon, it induces an onshore return flow of offshore water into the canyon. This return flow is initially driven by the dense water eddies descending the canyon and acting like a bucket brigade. At later times, another mechanism may also be at play: large dense cyclonic (anticlockwise) eddies on the northern continental shelf may pull more dense water out of the canyon, producing a region of low pressure near the canyon head, which induces an increase in ambient flow into the canyon. The Burger number (Rossby radius of deformation/canyon width) and the dense water source location with respect to the canyon head affect the offshore ambient water velocity up the canyon. Additionally, as the offshore water reaches the canyon head, the offshore water volume flux becomes larger than the dense water volume flux, possibly due to the low pressure region described above. Understanding these dynamics in the Antarctic region is of global significance for two main reasons: 1. The offshore-flowing dense water forms Antarctic Bottom Water and thus affects the global meridional circulation; 2. The onshore heat transport induced by the return flow drives glacial ice melt and therefore contributes to sea level rise.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157175</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shifting Paradigms: Data-Centric Approach for Marine Statics Correction using Symmetric Autoencoding</title>
<link>https://hdl.handle.net/1721.1/157174</link>
<description>Shifting Paradigms: Data-Centric Approach for Marine Statics Correction using Symmetric Autoencoding
Kanniah, Brindha
Deep learning has demonstrated remarkable performance in a wide variety of domains and is often leveraged for making high-stakes decisions. Parallel to its growing and beneficial presence in other domains, deep learning is gaining a notable reputation for solving challenging problems in geophysics. A key problem - given the escalating energy and geosequestration demands in present times - is marine statics correction. The traditional workflow for correcting marine statics has been based on a model-centric paradigm. This paradigm involves a series of transformations between non-commensurate spaces: first, inversion from seismic data space to velocity model space and second, forward modeling from velocity model space to seismic data space. Statics correction within this paradigm has severe drawbacks, mainly its high compute, time, and labor costs, and inaccuracies stemming from errors in velocity model inversion or from unmet assumptions about subsurface structure. Overcoming these drawbacks was thus the prime motivation for our study, where we chose to leverage deep learning as the core algorithmic tool to understand the limits of the model-centric paradigm and explore the performance horizons of a different, data-centric paradigm for statics correction. The main feature of the data-centric paradigm is the direct mapping between commensurate data spaces, eliminating the need for intermediary transformations to and from velocity model space. Initial benchmark tests on the model-centric approach revealed the impact of inaccuracies in velocity model inversion as substantial nonzero timeshifts - exceeding 0.01s, and reaching values as large as 0.04s - for most arrivals in seismic data. These arrival time precision levels are unacceptable for good seismic imaging and time-lapse analysis, underscoring the need for an improved approach to marine statics correction. Consequently, we began our investigations into the data-centric paradigm.
Focusing on disentangling the effects of varying seawater velocity from coherent subsurface geology in seismic records, we implemented an autoencoder algorithm, named SymAE. Notably, SymAE leverages the permutation symmetry of coherent subsurface information to separate that information from nuisance variations. Once trained, SymAE is able to redatum selected subsurface and water velocity information in its latent space to produce statics-corrected seismic records. Our results show that for training datasets of increasing subsurface complexity, SymAE strongly converges all dynamic timeshifts to zero, aligning perturbed traces to reference traces. Crucially, SymAE delivers the required timeshift precision of 0.01 seconds for all arrivals - an achievement that the model-centric approach falls short of. This notable precision improvement using SymAE highlights how a streamlined data-centric paradigm outperforms the traditional model-centric paradigm of marine statics correction. This finding is pivotal, as it lays the groundwork for the real-world deployment of SymAE for statics correction in challenging deepwater environments.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157174</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding tumor cell plasticity in spatial transcriptomics with graph attention networks and walk-based pseudotime analysis</title>
<link>https://hdl.handle.net/1721.1/157173</link>
<description>Understanding tumor cell plasticity in spatial transcriptomics with graph attention networks and walk-based pseudotime analysis
Zamora, Izabella
Tumor cell plasticity in cancer is a key driver in tumor progression, heterogeneity, metastasis, and treatment resistance. Tumor cells change states from the conventionally easier-to-treat epithelial state to the more resistant mesenchymal state. Understanding the transition dynamics of these states and the extrinsic factors influencing them is crucial for improving therapeutic strategies and patient outcomes. Utilizing spatial transcriptomics, extrinsic driving factors of plasticity can be probed. We introduce PlastiNet, which uses a graph attention-based network to create a spatially aware embedding. The utility of our approach is validated in model systems, specifically in the brain and colon, where it successfully identifies biologically relevant neighborhoods and maps differentiation pathways. When applied to pancreatic ductal adenocarcinoma (PDAC), it identifies distinct, conserved neighborhoods within the tissue, including diverse immune and cancer clusters. By estimating a differentiation path from epithelial to mesenchymal-like cells, we can identify intermediate states despite a limited set of tumor marker genes. This cellular differentiation path shows enrichment and depletion of certain cell types within local neighborhoods, aligning with known correlations, and by leveraging inferred ligand-receptor interactions, we can pinpoint potential drivers of plasticity to test in vitro. PlastiNet effectively generates hypotheses directly from patient-derived spatial transcriptomics samples, offering insights into the cellular mechanisms driving tumor plasticity.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157173</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Origins of the East Greenland Coastal Current on the Northeast Greenland Shelf: a Comparison of Two Reanalysis Products</title>
<link>https://hdl.handle.net/1721.1/157172</link>
<description>The Origins of the East Greenland Coastal Current on the Northeast Greenland Shelf: a Comparison of Two Reanalysis Products
Vianco, Sara L.
The East Greenland Coastal Current (EGCC) carries some of the freshest outflow from the Arctic southward along the East Greenland Shelf and into the Nordic Seas and subpolar North Atlantic. How this fresh water initially flows onto the Northeast Greenland Shelf (NEGS) and feeds the EGCC is not well known due in part to the lack of observations in the region. In this thesis, I use two ocean reanalyses, the Regional Arctic Ocean/sea-ice Reanalysis (RARE) and Global Ocean Physics Reanalysis (GLORYS) to explore the structure and dynamics of the ocean circulation on the NEGS. To validate the use of these products in the region, I compare the reanalysis products to the Fram Strait Arctic Outflow Observatory for the period of 2003-2019. In the mean, RARE is too warm and salty compared to the moorings, while the properties in GLORYS track more closely to the observations. However, the observed velocity field is better represented in RARE than GLORYS. From there, I analyze the cross-shelfbreak flow from 74°N to 81.5°N in the two reanalysis products, and conclude that transport onto the NEGS of waters fresher than 34 salinity is driven by an Ekman circulation that arises from along-shelfbreak winds and a widening shelf south of 81.5°N. The enhanced transport of fresh water also shifts the isohalines across the shelfbreak, directing a geostrophic flow onshelf between 81°N and 79°N. The convergence of fresh water on the NEGS initiates the EGCC as an identifiable and distinct feature around 80°N in RARE, uniting the EGCC along the southwest coast of Greenland and its northern counterpart, the Polar Surface Water (PSW) Jet. In GLORYS, the EGCC is not present throughout the domain, though there is a weak net southward flow on the NEGS. The EGCC in RARE is primarily buoyancy-driven, though the along-coast winds likely play a major role in maintaining the density front that supports the EGCC. 
Results from this thesis have implications for the transport and fate of Arctic and Greenland-sourced fresh water, and stratification in the high latitude North Atlantic and Nordic Seas.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157172</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neutronic-Thermal Simulation of Micro Reactor Designs for the Purpose of Analyzing the Impact of Thermal Expansion and Hydrogen Migration in Metal Hydride Moderator</title>
<link>https://hdl.handle.net/1721.1/157171</link>
<description>Neutronic-Thermal Simulation of Micro Reactor Designs for the Purpose of Analyzing the Impact of Thermal Expansion and Hydrogen Migration in Metal Hydride Moderator
Kendrick, W. Reed
The recent increased interest in microreactor designs has presented the opportunity to take advantage of the smaller core dimensions to perform steady-state coupled neutronic-thermal simulation with the inclusion of an additional physics system. This work accomplishes this by adding thermal expansion and zirconium hydride-based hydrogen diffusion to the neutronic-thermal simulation of multiple heat pipe microreactor designs. Microreactors’ smaller cores are inherently characterized by more leakage than gigawatt-scale reactor cores. Representing thermal expansion in the coupling system may reveal neutronic or thermal impacts of geometric expansion that have yet to be noted for these smaller-scale geometries; this is the impetus for the work on thermal expansion. The work on hydrogen diffusion is inspired by the common use of zirconium hydride in microreactor designs as a moderator. This material provides a high density of hydrogen with a high melting point, but features a well-documented increase in the mobility of hydrogen within the zirconium lattice at high temperatures. Coupling this migration of hydrogen within the neutronic-thermal simulation is performed in order to identify and analyze neutronic and thermal impacts due to the movement of hydrogen within the moderator. Additionally, a heat pipe failure case is simulated for each microreactor geometry studied, aimed at analyzing the impacts of multipipe failure on both thermal expansion and hydrogen diffusion, as well as their downstream neutronic-thermal effects.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157171</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On-Stack Replacement Across User-Kernel Boundaries</title>
<link>https://hdl.handle.net/1721.1/157170</link>
<description>On-Stack Replacement Across User-Kernel Boundaries
Mohr, Katherine
In large, distributed computations with small amounts of work done at each node, networking latencies quickly add up, especially in comparison to the time taken to execute small tasks. As such, lowering network latencies is crucial to getting good performance. Previous research has shown that often the largest contributors to network latencies are data copies between kernel and application buffers. Conventional wisdom argues that to solve this problem, one should move the networking stack out of the kernel and into the user space or networking hardware. Instead, we build upon an alternative approach, known as LakePlacid. LakePlacid mitigates the kernel-user boundary overhead issue by moving the most important application logic out of the user space and into the kernel. This thesis proposes and implements a key improvement to LakePlacid. Because only part of the application logic is migrated to the kernel, some packets necessarily must be resolved in the standard user space application. The system discussed in this thesis allows packets which cannot be handled in the kernel to seamlessly continue in user space via on-stack replacement, thus preventing side effects from being executed erroneously. This system for on-stack replacement is very general, allowing execution to switch between code versions at any conditional, and it is novel in its ability to switch stacks across the user-kernel boundary. With this change, LakePlacid is able to better maintain the semantics of user applications, making it more feasible in practice.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157170</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Adaptive Layer Freezing through Hyperparameter Optimization for Enhanced Fine-Tuning Performance of Language Models</title>
<link>https://hdl.handle.net/1721.1/157169</link>
<description>Evaluating Adaptive Layer Freezing through Hyperparameter Optimization for Enhanced Fine-Tuning Performance of Language Models
Figueroa, Reinaldo
Language models are initially trained on large datasets, enabling them to extract patterns and establish rich contextual connections. When dealing with data scarcity, transfer learning has become the go-to method for applying these models to specialized downstream tasks via fine-tuning. However, fine-tuning on small datasets can lead to overfitting and a lack of generalization. Generalization is crucial when deploying models that perform sensitive tasks in a real-world environment, as it dictates how well a model performs on unseen data. Conversely, overfitting is highly likely to occur when training on small datasets. This thesis proposes and evaluates a new method for fine-tuning language models by adaptively choosing specific learning rates for each transformer layer that provide higher performance on in-domain low-volume datasets. Additionally, we explore which layers inside the models usually hold more contextual information from pre-training that might be valuable to keep ‘frozen’ when fine-tuning on small datasets. This analysis provides insights into fine-tuning approaches during initial experiments when data is limited. Our results demonstrate limited performance gains on certain models while achieving more significant gains on others when fine-tuning using our proposed method. Additionally, our work provides valuable insight into the per-layer importance of language models by showing that certain layers have a stronger direct correlation with overall model accuracy.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157169</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Efficacy of Different Analysis Algorithms for Summarizing Online Deliberations</title>
<link>https://hdl.handle.net/1721.1/157168</link>
<description>The Efficacy of Different Analysis Algorithms for Summarizing Online Deliberations
Venkat, Naveen
For the past decade, online deliberation platforms like Polis have expanded the reach of deliberative democracy, which calls for political decisions to be based on the results of fair and balanced discussions among citizens, by enabling larger deliberations. However, because these discussions often generate a volume of comments too large for policymakers to review thoroughly, these platforms often include analysis algorithms that distill the conversation into a small set of comments, which policymakers can use as the basis of citizen input into decision-making. While Polis currently provides a clustering-based summary of the discussion, two newer aggregation algorithms, inspired by computational social choice theory and abstract argumentation theory, have recently been proposed. These algorithms seek to provide more representative (i.e. portraying all perspectives) and consistent (i.e. comments within a perspective do not oppose each other) summaries of the discussion, respectively. Still, though these newer algorithms may have theoretical advantages over Polis’s current methods, they have yet to be evaluated in a real-world application. Through a randomized controlled trial of all three approaches using a nationally representative sample, we compare their practical effectiveness, as measured by participants’ subjective experiences of how well these summaries represent their concerns. We find that the computational social choice-inspired algorithm consistently outperforms Polis’s current methods in this regard, though future theoretical work is still needed to fully adapt this approach to a real-world setting.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157168</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adversarial Prompt Transformation for Systematic Jailbreaks of LLMs</title>
<link>https://hdl.handle.net/1721.1/157167</link>
<description>Adversarial Prompt Transformation for Systematic Jailbreaks of LLMs
Awoufack, Kevin E.
The rapid integration of Large Language Models (LLMs) like OpenAI’s GPT series into diverse sectors has significantly enhanced digital interactions but also introduced new security challenges, notably the risk of "jailbreaking," where inputs cause models to deviate from their operational guidelines. This vulnerability poses risks such as misinformation spread and privacy breaches, highlighting the need for robust security measures. Traditional red-teaming methods, involving manually crafted prompts to test model vulnerabilities, are labor-intensive and lack scalability. This thesis proposes a novel automated approach using Reinforcement Learning from Human Feedback (RLHF) to transform unsuccessful adversarial prompts into successful jailbreaks. The approach learns a policy, informed by similarity to existing jailbreak prompts, that teaches the generator LLM what makes an adversarial prompt successful. This was implemented using Proximal Policy Optimization (PPO) and tested with both a classifier and a judge reward model, attaining at best a 16% attack success rate on a target model. The approach can be applied to any prompt at the word level and further analyzed with respect to toxicity characteristics. This work contributes to advancing LLM security measures, ensuring their safer deployment across various applications.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157167</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Computational Tool for Simplifying Engineering Tradeoff Analysis for the Design of Cost-Optimized, Time-Variant, Electrodialysis Reversal Desalination Systems</title>
<link>https://hdl.handle.net/1721.1/157166</link>
<description>Development of a Computational Tool for Simplifying Engineering Tradeoff Analysis for the Design of Cost-Optimized, Time-Variant, Electrodialysis Reversal Desalination Systems
Costello, Jeffrey
This study presents an analytical tool for characterizing a wide swath of the design space for time-variant electrodialysis reversal brackish water desalination (TEDR) while avoiding the computation time often required by mechanistic models of electrodialysis reversal (EDR) and time-variant processes. In place of explicit computation, this paper proposes simplifying assumptions to simulate the desalination power and production rate of a TEDR process, enabling rapid year-long simulation and system optimization. The output of the model is compared to experimental data from a pilot TEDR system and shows good agreement in desalination power and production rate. Disagreement between the modeled and experimental pressure losses suggests additional losses in the experiment that may be accounted for in future work. Two case studies, one for potable water in the American Southwest and another for irrigation water in the Middle East and North Africa (MENA) region, compare the results from 54 optimized systems. The results illustrate the complexity of system design and selection, elucidating tradeoffs between different models of electrodialysis (EDR) stacks, operating modes, and system configurations. The output of this model will enable system designers to confidently design and implement cost-effective TEDR systems to combat rising global freshwater scarcity.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157166</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Process Substitution on Manufacturing Costs: A Comparative Analysis of Sheet Metal Forming versus Extruded Steel Cutting</title>
<link>https://hdl.handle.net/1721.1/157164</link>
<description>The Impact of Process Substitution on Manufacturing Costs: A Comparative Analysis of Sheet Metal Forming versus Extruded Steel Cutting
Talal, Omar
Sheet metal manufacturers continuously seek methods to enhance automation and reduce costs. This thesis explores process substitution and design standardization through a parameter-driven cost model and case studies applying Design for Manufacturability &amp; Assembly (DFMA) principles. Specifically, it evaluates substituting conventional sheet metal components with extruded steel profiles and replacing manual press brake operations with automated tube laser cutting. The findings show that tube laser adoption across a broad range of channels can reduce costs by 49% to 79%, with a payback period of under two years, even in scenarios with fluctuating raw material prices. The study proposes strategies for maximizing tube laser utilization through product mix analysis, redesign for compatibility, and designing with tube laser as the primary method. A developed automation tool using clustering aids profile identification, though the study highlights the need for improved data management around C-channel dimensions to enhance process standardization. The investigation confirms that extruded steel can be a cost-effective alternative to large-scale channel products, providing solutions for industry transition through direct replacement, compatibility-focused redesign, or design guidelines optimized for extruded steel.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157164</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of Koopman Operator Theory to Legged Locomotion</title>
<link>https://hdl.handle.net/1721.1/157163</link>
<description>Application of Koopman Operator Theory to Legged Locomotion
Terrones, Jasmine G.
Nonlinearities from complicated robot systems and harsh contact dynamics have long impeded the effectiveness of optimal control strategies for legged robots. In this work, we present a linearized simple walking model using Koopman Operator Theory, and its usage in Linear Model Predictive Control (L-MPC). Various walking and contact models were evaluated, but ultimately the rimless wheel was selected due to its inherent stability and low dimensionality, and a nonlinear viscoelastic model was used to accurately capture floor contact and impact dynamics. Koopman models were developed using both Radial Basis Functions (RBFs) and neural network-generated observables for the passive rimless wheel. A novel actuation method with linear actuators, combined with the Control Coherent Koopman methodology, resulted in accurate linear models that effectively enabled L-MPC to control the wheel on flat ground. This model outperformed those created using the more traditional Dynamic Mode Decomposition with Control method. This work demonstrates the power of Koopman linearization to produce a unified set of linear dynamical equations that encompass various contact and non-contact configurations and demonstrates the effectiveness of the Control Coherent Koopman methodology in generating an accurate input matrix across these different contact modes.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157163</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Insights on Serology, CRISPR Diagnostics, and Machine Learning Architectures for Biological Sequences</title>
<link>https://hdl.handle.net/1721.1/157162</link>
<description>Insights on Serology, CRISPR Diagnostics, and Machine Learning Architectures for Biological Sequences
Siddiqui, Sameed Muneeb
Fueled by technological breakthroughs, advancements in our understanding of infectious agents offer unprecedented potential for their early detection, intervention, and ultimately, eradication. This dissertation focuses on combining cutting-edge immunological, diagnostic, and computational approaches to confront infectious diseases more effectively, with a particular emphasis on SARS-CoV-2. The first two chapters delve into the immunological aspects of SARS-CoV-2, exploring the dynamics of antibody responses during primary infection and reinfection. First, we explore the dynamics of antibody responses during primary infection, revealing a “switch-like” relationship between antibody titer and function. Next, we investigate the humoral immune response following reinfection, identifying specific biomarkers that differentiate between primary infection and reinfection, offering potential tools for monitoring disease spread and understanding immunity. The subsequent chapter shifts focus towards technological innovation in diagnostics, presenting a novel bead-based method for CRISPR diagnostics that leverages a split-luciferase reporter system for enhanced sensitivity and a highly deployable bead-based platform for multiplexed pathogen detection. This work represents a significant advancement in rapid, scalable, and portable diagnostic tools. Finally, the dissertation culminates with a leap into computational biology, introducing ’Janus,’ a subquadratic state space model designed to efficiently handle large biological sequences. Janus demonstrates superior performance in genomics and proteomics tasks, outperforming existing models with significantly fewer parameters, thus paving the way for more efficient and accurate modeling of protein behavior and other biological processes. Collectively, these works contribute to the broader field of infectious disease research with new immunological insights paired with advances in technological and computational solutions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157162</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design, simulation, and testing of a low cost laser micromachining system for flexible and rapid tissue-on-chip fabrication.</title>
<link>https://hdl.handle.net/1721.1/157161</link>
<description>Design, simulation, and testing of a low cost laser micromachining system for flexible and rapid tissue-on-chip fabrication.
Nin, Jorge A.
This study introduces a novel approach to tissue-on-chip device fabrication using low-cost picosecond laser ablation, addressing critical limitations of current manufacturing methods such as soft lithography, particularly in terms of material compatibility, feature resolution, and scalability. We developed a comprehensive finite element method (FEM) model of the laser ablation process, incorporating key physical phenomena including laser-material interactions, heat transfer, and material removal dynamics. This model, validated against experimental results, accurately predicts ablation depths within 20% of measured values across a range of laser parameters. Our experimental setup, utilizing a cost-effective 10 kHz picosecond laser system, demonstrates superior capabilities in creating high-aspect-ratio microchannels exceeding 20:1, surpassing traditional manufacturing techniques. We achieve precise control over channel dimensions, with widths ranging from 20 to 500 micrometers and depths up to 1 mm, while maintaining sub-micron surface roughness (Ra &lt; 0.8 &#120583;m). The system’s versatility is showcased through the fabrication of complex structures such as Tesla valves and high-resolution text features, with a minimum feature size of 20 &#120583;m. We present practical techniques for component selection and process parameter optimization using our simulation, reducing expensive and time-consuming experimentation. This work establishes low-cost picosecond laser ablation as a viable and advantageous method for tissue-on-chip manufacturing. With fabrication times of 6-8 minutes for small features and less than an hour for a full chip, our method represents a significant advancement in rapid prototyping capabilities. These findings demonstrate that laser ablation is a powerful technique for manufacturing tissue-on-chip devices, offering high resolution, flexibility, and scalability.
This approach has the potential to overcome the limitations of traditional methods, enabling the next generation of sophisticated, physiologically relevant in vitro models for biomedical research and drug development. The successful development and validation of the FEM model, coupled with practical demonstrations, provide a solid foundation for further advancements in laser-based fabrication of tissue-on-chip devices, potentially accelerating drug discovery processes and enabling more accessible production of personalized medicine platforms.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157161</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Design Study Using Simulation Techniques in Roll Form Production</title>
<link>https://hdl.handle.net/1721.1/157160</link>
<description>A Design Study Using Simulation Techniques in Roll Form Production
Lee, Joo Won
Sheet metal roll forming is a continuous bending process where metal strips are fed through a sequence of rolls to achieve a specific cross-sectional profile. This method is vital in the automotive industry for producing high-strength, lightweight components with precision, consistency, and cost-efficiency. This project focuses on optimizing Novelis’s aluminum roll forming process using Computer-Aided Engineering (CAE) techniques, including UBECO Profil, AutoCAD, and Finite Element Analysis (FEA) tools such as Ansys and LS-Dyna. Initial simulations on a square tube profile were key in identifying critical stations, leading to performance improvements through targeted adjustments. Stress and strain analyses revealed how operational factors, such as roll adjustments, affect the section shapes and angles, facilitating the refinement of roll forming station settings. With a Design of Experiment (DOE) framework, the study identified key variables to enhance simulation output accuracy and optimize roll forming settings. The team successfully built a digital twin of the new roll forming line, which accurately predicted the final product's geometry and provided precise recommendations for machine settings to achieve the desired shape. Novelis can apply these insights to enhance their software, thereby potentially increasing production efficiency. This approach not only supports current operations but also lays the foundation for future research and development advancements.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157160</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Affordable Fiber Extrusion Device for Educational Purposes: Design Improvements, Controls Development, and Manufacturing Scale-up</title>
<link>https://hdl.handle.net/1721.1/157159</link>
<description>Affordable Fiber Extrusion Device for Educational Purposes: Design Improvements, Controls Development, and Manufacturing Scale-up
Zhang, Yiqian
The Fiber Extrusion Device (FrED) is an affordable desktop tool intended for engineering education. It mimics the fiber draw process, allowing students to study topics such as data acquisition, control systems, computer vision, data analytics, and smart manufacturing. As an educational tool, the goal of the device is to replicate the practical laboratory experience in remote learning scenarios. FrED has gone through multiple iterations, yet several outstanding issues remain. Building on the 2023 team’s progress, the 2024 project objectives include refining the design, developing controls, scaling up manufacturing, designing the assembly line, managing inventory, creating educational content, and conducting user testing and pilot runs. This thesis specifically details the author’s contributions to enhancing mechanical designs, advancing control systems, increasing production capacity, and planning educational materials. Mechanical components in the frame, the cooling system, and the diameter measurement system were redesigned to improve stiffness and stability. Local PID controllers were implemented for the DC motor and heater, effectively closing the feedback loop for fiber diameter control. The production target of manufacturing 35 FrED units was successfully achieved within the planned timeframe, with the packaging design optimized for efficient shipping. Additionally, an assembly manual, a graphical user interface, and control activities were developed as part of the educational content. Three user testing sessions were conducted to gather feedback.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157159</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technology Performance Curves to Inform Government and Private Investment</title>
<link>https://hdl.handle.net/1721.1/157158</link>
<description>Technology Performance Curves to Inform Government and Private Investment
Roberts, Matthew R.
Forecasts of technological progress are used to inform decisions in the public and private sectors that shape the modern technology landscape on a global scale. Technology performance curves are the quantitative, model-based representations of technological change employed in industrial, economic, and integrated assessment models to inform decision-making processes. Technology performance curves have evolved from their origins in the 1920s modeling of airframe manufacturing labor cost to consider mechanisms of technological progress, including learning-by-doing, learning-by-searching, economies of scale, and exogenous improvement. Examining changes to the performance and prevalence of technologies can provide insight that is relevant for product strategy and market forecasts. This knowledge can also help estimate the potential impact of government market policy and funding for research and development. This thesis seeks to consolidate the available literature on the various models of technology performance curves into a conceptual framework that can be used to understand the features and limitations of models, and their potential use cases.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157158</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Government Policies in Middle Eastern Countries on Digital Platform Startups</title>
<link>https://hdl.handle.net/1721.1/157157</link>
<description>The Impact of Government Policies in Middle Eastern Countries on Digital Platform Startups
Ali Osman, Mohamed Mamdouh
In the last decade, the financial sector has changed significantly. The introduction of new technologies and mobile applications transformed the entire industry, leading to the rise of financial technology (fintech) startups. Fintech startups offer a wide range of products and services, such as digital payments, Buy Now, Pay Later (BNPL), crowdfunding, and peer-to-peer lending. Middle East and North African (MENA) countries have seen significant growth in the number of fintech startups and the total investment value in these companies. For example, in Egypt, Fawry is the biggest payment service provider; it covers nearly 25% of Egyptian customers and handles more than 3 million daily operations. Some fintech companies in MENA have also become unicorns, such as Tabby of Saudi Arabia and MNT-Halan of Egypt. The increased penetration of fintech in MENA countries has consistently raised concerns about the risks these companies can pose to data security, consumer protection, and financial stability. This poses a central question for financial sector authorities and regulators: how can they increase the number of these companies to support financial inclusion and the growth of financial sectors while, at the same time, alleviating the dangers and concerns that these fintech companies present? This thesis provides a comprehensive analysis of the growth of fintech startups in the MENA region, focusing on four countries: Egypt, Saudi Arabia, the UAE, and Jordan. The study then investigates the fintech regulations in these countries, aiming to understand how recent regulations have impacted the growth of fintech startups through qualitative insights and case studies. The study reveals the following: First, Jordan's fintech regulations are still in their early stages; despite some existing fintech regulations, significant ones, such as data protection and cybersecurity laws, are still missing. 
The absence of these regulations might discourage investors and entrepreneurs from launching or expanding their fintech businesses in Jordan. Second, in Egypt, the fintech regulations align with investors' and entrepreneurs' expectations; however, economic conditions, namely the budget deficit and currency fluctuations, might hinder the growth of the fintech sector. Third, for Saudi Arabia and the UAE, the fintech ecosystem and regulations encouraged entrepreneurs to start and grow their businesses and customers to increase their adoption of fintech products and services. The development of regulations, laws, and guidelines in both countries contributed to the growth of the fintech sector while, at the same time, safeguarding customers.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157157</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Cognitive Underpinnings of Legal Complexity</title>
<link>https://hdl.handle.net/1721.1/157156</link>
<description>The Cognitive Underpinnings of Legal Complexity
Martínez, Eric
Across modern civilization, societal norms and rules are codified and communicated largely in the form of written laws. Although principles of communicative efficiency and legal doctrine dictate that laws be comprehensible to the common citizen, legal documents have long been attested to be incomprehensible to those who are required to comply with them (i.e. everyone). Why? This thesis investigates this question using the tools of cognitive science. Chapter II approaches the question from the comprehender side, documenting the cognitive and linguistic factors that make legal documents difficult to understand for non-lawyers. Corpus analyses reveal that legal contracts are laden with psycholinguistically complex structures at a strikingly higher rate than nine baseline genres of English. Experimental evidence further reveals that some of these structures, such as center-embedded syntax, inhibit recall and comprehension of legal content more than others, suggesting that difficulties in understanding legal content result largely from working-memory limitations imposed by long-distance syntactic dependencies as opposed to a mere lack of specialized legal knowledge. Chapter III extends these results to other legal genres and investigates the cognitive and linguistic profile of law over time. Analyzing every law passed by Congress between 1951 and 2022 alongside matched texts from four different genres, we find that laws have been and continue to be disproportionately laden with psycholinguistically complex structures relative to baseline genres of English, suggesting that top-down efforts to simplify legal texts over this period have largely failed. Chapters IV and V turn to the producer side, investigating why legal actors write in a complex manner in the first place. We find that lawyers likewise struggle to recall and comprehend legal content drafted in a complex register and prefer simplified legal documents to complex documents across virtually every dimension. 
We further find that people tasked with writing official laws write in a more convoluted manner than when tasked with writing unofficial legal texts of equivalent conceptual complexity, whereas people editing a legal document do not write in a more convoluted manner than when writing from scratch. From a cognitive perspective, these results suggest law to be a rare exception to the general tendency in human language towards communicative efficiency. In particular, these results indicate law’s complexity to be derived from its performativity, whereby low-frequency structures may be inserted to signal law’s authoritative, world-state-altering nature, at the cost of increased processing demands on readers. From a legal perspective, these findings call into question the coherence and legitimacy of legal theories and principles whose validity rests on the notion of law being comprehensible to laypeople, such as ordinary meaning, fair notice, and modern variants of textualism. From a policy perspective, this work informs long-standing efforts to simplify legal documents for the public at-large, which, despite bipartisan support, have remained largely intractable. Finally, from a field-building perspective, this thesis lays the foundation for a broader interdisciplinary research program that uses insights from cognitive science to inform long-standing and cutting-edge questions of legal doctrine and policy.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157156</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The development and application of mass-spectrometry-based tools to monitor proteome remodeling in microbes</title>
<link>https://hdl.handle.net/1721.1/157155</link>
<description>The development and application of mass-spectrometry-based tools to monitor proteome remodeling in microbes
Telusma, Bertina
Outside of controlled laboratory environments, cells are continually sensing and adapting to highly variable environmental conditions in an effort to maintain cellular homeostasis and to maximize fitness in each condition. Although specific stresses elicit distinct cellular responses, the reshaping of the proteome is a central element of most cellular adaptation. This dynamic proteome remodeling involves a highly orchestrated combination of regulated protein synthesis, degradation, and modification, each contributing to the overall goal of matching the capacity of the expressed proteome to the demands of the sensed environment. Although each pathway contributes, whether cells mount a response primarily driven by synthesis or by degradation ultimately hinges on the nature and duration of the stress, as well as the cell type involved. Understanding the balance of these contributions has historically been challenging. As such, there is a need for approaches that can quantitatively resolve the contributions of protein synthesis and protein degradation pathways in a wide array of cellular and environmental contexts.&#13;
Quantitative proteomics via mass spectrometry stands out as a powerful tool for deciphering these questions, as it allows one to simultaneously monitor thousands of proteins. In this work, I leverage the power of quantitative proteomics coupled with metabolic labeling to investigate how microbes remodel their proteome during cellular adaptation. In chapter 2, I describe the development and characterization of these proteomic methods, including a detailed analysis of the variety of metabolic labeling schemes that can be employed in budding yeasts, which facilitate the bulk of my thesis work. In chapters 3 and 4, I apply these methods to the methylotrophic yeast, Komagataella phaffii, which grows robustly on a diverse set of carbon sources. As such, I use K. phaffii as a key case study to explore questions of cellular adaptation. I find that the K. phaffii expressed proteome varies greatly between cells grown in methanol, oleate, or glucose and, interestingly, that proteome remodeling strategies vary in a context-dependent manner. Specifically, I find that autophagic degradation drives proteome remodeling under nitrogen starvation conditions, with selective autophagic degradation of peroxisomes supporting the cells' transition from methanol media to glucose media. In contrast, I uncover that synthesis and growth-coupled dilution are the primary drivers as K. phaffii adapts from methanol media to oleate media. Given the deep proteome coverage enabled by my methods, and my application of these methods across six genetic backgrounds and five environmental conditions, these datasets also serve as a rich resource for identifying conditions stimulating degradation of specific proteins, as well as the genetically defined pathways supporting these activities. Finally, in appendices 1 and 2, I highlight how these approaches can be applied across different microbial species to broadly characterize the proteomic consequences of nutrient and genetic perturbations.
Overall, my work highlights how the development and application of powerful quantitative methods provide a global view of how proteome remodeling supports cellular adaptation, reveal deeper insights into pathways supporting turnover of specific proteins, and help to identify potential therapeutic targets to ameliorate protein-turnover related diseases.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157155</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational design of a novel soft-Xray-based turbulence diagnostic in NSTX-U</title>
<link>https://hdl.handle.net/1721.1/157154</link>
<description>Computational design of a novel soft-Xray-based turbulence diagnostic in NSTX-U
Chen, Xiang
Turbulence transport poses a significant challenge in fusion research. The measurement of turbulent fluctuations is critical for comprehending turbulence transport, predicting its behavior, and ultimately controlling it to maximize fusion gain. However, there is a notable scarcity of electron temperature fluctuation diagnostics, including for high-density tokamak plasmas and in spherical tokamaks. The ultimate aim of our research is to develop a novel diagnostic tool for temperature fluctuations. Before experimental exploration, conducting a numerical feasibility study is essential for the proposed diagnostic. The high spatial and temporal resolutions that are attainable using Soft X-ray (SXR) imaging make it a promising candidate. The primary objective of the thesis is to assess the feasibility of an electron temperature fluctuation diagnostic based on SXR imaging.&#13;
&#13;
The feasibility study involves gathering fluctuation data and constructing a numerical diagnostic model. This model computes synthetic SXR measurements, which are then reconstructed using tomographic algorithms to derive electron temperature fluctuations. These reconstructions are then compared against the ground truth to assess diagnostic performance. Optimization of performance is achieved by adjusting diagnostic parameters to identify the optimal set for feasibility analysis.&#13;
&#13;
This study consists of two primary parts. First, we utilize a simplified toy model with circular plasma geometry and synthetic fluctuation data abstracted from gyrokinetic simulation fluctuation spectra; we employ a pseudolocal tomography algorithm for reconstruction and demonstrate reliable measurement of electron temperature fluctuations for X-ray detectors with a sufficiently high signal-to-noise ratio. Second, we conduct a more comprehensive feasibility study using fluctuation data directly generated from gyrokinetic simulations, in a real (spherical tokamak) NSTX-U configuration with complex plasma geometry. Assumptions from the previous study, such as infinitely thin beam size, are relaxed to assess their impact on reconstruction. Additionally, we enhance the reconstruction algorithm using neural networks, enabling reconstruction of both electron density and temperature fluctuations, as well as cross-phase analysis. Overall, the study confirms the feasibility of the SXR diagnostic given that SXR detectors meet minimum requirements. Furthermore, we explore fluctuations generated from different gyrokinetic simulations, demonstrating the diagnostic's ability to differentiate fluctuations originating from different instabilities under the same configuration.&#13;
&#13;
This research provides a theoretical foundation and guidance for developing a practical SXR-based electron temperature fluctuation diagnostic for experimental use. It outlines the measurable quantities, their limitations, and the minimum requirements for SXR hardware to ensure reliable measurements. This contribution significantly advances our understanding of plasma turbulence transport, addressing a key challenge in fusion research. However, the current study is limited by its use of a simplified emissivity model. Utilizing a more comprehensive model incorporating atomic data could yield more robust conclusions. Additionally, incorporating real hardware parameters would enhance the reliability of the conclusions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157154</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Sandbar Effects on Nearshore Waves and Morphological Change using SWAN</title>
<link>https://hdl.handle.net/1721.1/157153</link>
<description>Modeling Sandbar Effects on Nearshore Waves and Morphological Change using SWAN
Murman, Charles E.
Numerical model simulations (Delft3D SWAN) are used to examine the impact of small alongshore variations in the bathymetry of an outer sandbar (in about 5-m water depth) on the nearshore wave field as the shallow (&lt; 3 m) bathymetry changes from nearly alongshore uniform to strongly spatially variable, in order to understand wave-driven morphologic evolution. Waves were observed at Duck, NC with an array of 14 pressure gages between 1- and 3-m water depth spread over 250 meters alongshore. Bathymetry was measured between the dune toe and about 8-m water depth on September 26 and October 2, 2013. The bathymetry evolved from roughly alongshore uniform on September 26 to strongly alongshore variable on October 2. Between these dates incident significant wave heights ranged from 0.5 meters to 2.3 meters, with incident angles from 20 degrees north to 5 degrees south of shore normal. Simulations were run with observed bathymetry for both the outer bar and inner shallow bathymetry, with smoothed outer bar and observed shallow bathymetry, and with digital elevation model bathymetry to determine the effects of outer bar and shallow bathymetry on wave evolution.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157153</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the Illusion of Wetness: Cold Dry Stimuli in Sensory Perception</title>
<link>https://hdl.handle.net/1721.1/157152</link>
<description>Investigating the Illusion of Wetness: Cold Dry Stimuli in Sensory Perception
Ozor-Ilo, Ozioma
Humans lack specialized receptors for perceiving wetness and so it is a compound sensation based on changes in skin temperature and contact pressure that are sensed by thermoreceptors and mechanoreceptors in the skin. In addition to perceiving the wetness of damp fabrics in contact with the skin or the presence of sweat on the skin, humans can perceive wetness in the absence of any moisture, a phenomenon known as illusory wetness. The illusion has been shown to arise when the skin is in contact with a surface and is cooled. This thesis is focused on understanding the variables that contribute to illusory wetness by first determining the difference threshold for perceiving the rate of skin cooling and relating this to perceived wetness. The results from the first two experiments showed that the difference threshold averaged 0.9–1.06 °C/s at a reference value of 0.5 °C/s. For perceiving wetness, the threshold averaged 1.08–1.41 °C/s. The latter finding indicates that the rate at which the skin cools must exceed some threshold value before it is perceived as being wet. A third experiment explored the role of temperature and surface material in the perception of illusory wetness. The results showed that temperature was the more critical variable, with ratings of perceived wetness increasing as the temperature decreased further below the baseline skin temperature. These experiments have demonstrated the effect that rates of cooling have on perceiving illusory wetness and have contributed to a better understanding of the role of surface material and temperature on perceiving wetness during static contact. These findings are relevant to simulating wetness in prosthetic devices and virtual reality environments.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157152</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Process Replacement on Sheet Metal&#13;
Product Design: The Use of Steel Extrusions Versus&#13;
Formed Sheet Metal</title>
<link>https://hdl.handle.net/1721.1/157151</link>
<description>The Impact of Process Replacement on Sheet Metal&#13;
Product Design: The Use of Steel Extrusions Versus&#13;
Formed Sheet Metal
Yuan, Chenyu
The sheet metal manufacturing industry, with its rich history and legacy, continues to seek innovative methods to enhance automation and reduce costs in an increasingly competitive market. Design for Manufacturability &amp; Assembly (DFMA) has emerged as a strategy to simplify product designs, thereby improving manufacturing efficiency and reducing production costs. This research suggests the use of extruded steel profiles as an alternative to traditional sheet metal components that pose challenges for automation, particularly heavy gauge narrow channels. Additionally, it advocates for replacing manual press brake operations with advanced automated tube laser technology. The proposed shift not only simplifies the manufacturing process but also aligns with the broader goal of global cost reduction and process standardization, which are essential for enhancing New Product Introduction (NPI) efficiencies. The findings demonstrate that maximizing the application of tube laser technology across a diverse range of channels and products can lead to significant cost savings, ranging from 49% to 79%, with a payback period of less than two years. Even under fluctuating raw material prices, the tube laser method remains economically advantageous. Moreover, redesigning products to enhance compatibility with tube laser technology has been shown to increase the automation compatibility of an example product to 100%, underscoring the importance of incorporating DFMA principles from the early stages of product design.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157151</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Roll Form Bending Processes through Experimentation and Informed Predictive Analysis: A Strategic Approach to Optimize Tooling</title>
<link>https://hdl.handle.net/1721.1/157150</link>
<description>Enhancing Roll Form Bending Processes through Experimentation and Informed Predictive Analysis: A Strategic Approach to Optimize Tooling
Kompella, Sarvagnya
Sheet metal roll forming is a continuous bending process where metal strips pass through a series of rolls to achieve a specific cross-sectional profile. This technique is crucial in the automotive industry for producing high-strength, lightweight components with precision, consistency, and cost-effectiveness. This project aims to optimize Novelis’s aluminum roll forming process by employing Computer-Aided Engineering (CAE) tools, including UBECO Profil, AutoCAD, and Finite Element Analysis (FEA) software such as LS-DYNA. Initial simulations of a square tube profile identified key stations and led to performance enhancements through targeted adjustments. Stress and strain analyses demonstrated how operational factors, such as roll settings, influence section shapes and angles, facilitating the fine-tuning of roll forming station parameters. Using a Design of Experiments (DOE) framework, the study pinpointed critical factors to improve simulation accuracy and optimize roll forming settings. The results indicated that optimized stand height settings significantly improved the accuracy of the desired angles. These insights can be integrated within Novelis’ production line to boost production efficiency and roll performance. This research not only supports current operations, but also provides a foundation for future advancements in roll forming technology.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157150</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Refining Hardware of Desktop Fiber Extrusion Devices&#13;
for Affordable Manufacturing and Novel Fiber Prototyping</title>
<link>https://hdl.handle.net/1721.1/157149</link>
<description>Refining Hardware of Desktop Fiber Extrusion Devices&#13;
for Affordable Manufacturing and Novel Fiber Prototyping
Glasser, Kaili
The Fiber Extrusion Device (FrED) is a hands-on desktop tool designed to facilitate the teaching of manufacturing engineering concepts through remote laboratory experiences. FrED simulates the continuous fiber draw process used in various industries, including fiber optics, synthetic textiles, medical devices, aerospace, and construction. This device translates industrial-scale fiber draw towers into a compact version, allowing users to experiment with different parameters to understand their effects on manufacturing processes. Over the past three years, successive groups of MEng students have refined FrED’s design with the goal of creating a robust, functional, and affordable device for in-house manufacturing at the MIT FrED Factory. While the 2023 model achieved significant cost reduction, it required further hardware and electronics refinement for stable and repeatable performance. This thesis encompasses two main objectives: enhancing the hardware design and assembly processes for the final 2024 educational FrED model, and developing an alternative design for an advanced FrED version suitable for academic lab settings to rapidly prototype synthetic fibers. The first objective was met by improving the two most dynamic sub-assemblies—the gearbox and extrusion system—to ensure smooth and consistent operation. Additionally, the tolerances of mating parts and the locations and geometries of hardware inserts within manufactured parts were verified and adjusted according to manufacturing standards. Multiple jigs were also designed and fabricated to facilitate the assembly process of the gearbox and extrusion sub-assemblies, and two new parts were created to enhance user operation of FrED. For the second objective, an enhanced version of FrED capable of handling a wider range of preform materials was developed by upgrading the extrusion sub-assembly to operate at temperatures over three times higher than the educational version.
This feature had been previously attempted with older, more expensive versions of FrED but had not been pursued with the recent, more affordable iteration. The new high-temperature FrED successfully drew fibers from PLA, a biodegradable thermoplastic, using 3D printed preforms with distinctive geometries, demonstrating its potential for providing an affordable solution for rapid synthetic fiber prototyping in academic labs.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157149</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Internet Celebrity City: Social Media and Urban Space in China</title>
<link>https://hdl.handle.net/1721.1/157148</link>
<description>Exploring the Internet Celebrity City: Social Media and Urban Space in China
Chen, Yufei
“Internet celebrity space” offers a fresh perspective for studying urban spaces in the mobile Internet era as a new visual consumption space. The term "Internet celebrity," or wanghong in Chinese, is utilized in modern Chinese media to refer to celebrities and the specific cultural and consumption trends linked to them. This concept has surfaced alongside the growth of e-commerce platforms, with the recognition that wanghong often engage in promoting products, services, or lifestyles to their followers. The internet celebrity spaces, or wanghong spaces, can elevate the popularity of certain areas and influence local neighborhoods, communities, and economies. Internet celebrity urbanism involves broadening this trend from certain locations to greater scales, encompassing entire districts or extending this status to the scale of the whole city. This thesis explores the impact of internet celebrity spaces in China. It is divided into three parts. First, it demonstrates the phenomenon and its background: the study investigates how internet celebrity spaces are represented in social media. It then reviews the latest research, analyzing research perspectives and methods to anchor the author’s research questions in appropriate approaches. Lastly, the influence of internet celebrity spaces is discussed through a case study in Shanghai, observing their influence on street activity. Based on this analysis and its conclusions, suggestions for future development are given.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157148</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Target Design and Optimizations for Spent Fuel Transmutation</title>
<link>https://hdl.handle.net/1721.1/157147</link>
<description>Target Design and Optimizations for Spent Fuel Transmutation
Tukharyan, Grigor
There are six long-lived fission products (LLFPs) identified in nuclear spent fuel, which account for at least 99% of the long-term radiotoxicity once actinide recycling is completed. This thesis examines the feasibility of using proton beams to transmute LLFPs into shorter-lived or stable isotopes. While long-term storage for high-level waste would still be necessary, transmuting the LLFPs can reduce the volume of waste material that needs to be stored. The objectives of this research are to explore the design of a proton transmutation facility, as well as to determine the optimal LLFP target-blanket material configuration for maximizing the transmutation efficiency. This thesis analyzes the use of intermediate energy beams of 18-70 MeV from commercial cyclotrons for transmutation. This thesis also analyzes the use of 1000 MeV proton beams to generate a substantial number of secondary neutrons through spallation interactions with target materials. The secondary neutrons produced from the spallation process are utilized by the LLFP materials, while surrounding blanket materials are selected to enhance the transmutation efficiency. PHITS, a Monte Carlo transport code, is employed to computationally model the interactions between LLFP materials and the proton beam. In this thesis, PHITS is used to estimate the flux-energy spectrum and the number of atoms irradiated in the LLFP target during beam interaction. This data is then post-processed using a 0-dimensional analysis in FISPACT to estimate the transmutation rate for each LLFP. PHITS is also used to find the depletion rate of the LLFPs for the 18-70 MeV beam case and for spallation-induced transmutations in the 1000 MeV case. Geant4, a Monte Carlo transport toolkit, is used to calculate the production rate of particles attributed to the spallation process. Analysis of the performance of commercial cyclotrons with energies of 18-70 MeV indicates that transmutation rates increase with higher proton beam energy.
A cyclotron with a beam current of 10 mA and beam energy of 70 MeV running continuously can transmute 15.401 ± 0.069 g/year of Tc-99. However, Tc-99 is produced at a rate of approximately 8.54 kg/year in a 1 GW reactor, suggesting that a single commercial cyclotron beam is currently not viable for transmutation purposes. A proposed tank design with a lead/Tc-99 target that is surrounded by LLFP pins and heavy water is considered for the spallation study. Although using Tc-99 as a target directly transmutes 0.893 ± 0.002 kg/year through spallation, using lead as a target instead approximately doubles the transmutation rates in the LLFP regions for almost all of the LLFP isotopes. In both cases, the depletion rate of the LLFPs is greatly increased compared to using a commercial cyclotron of 70 MeV. A proton spallation source with a beam current of 10 mA and beam energy of 1000 MeV, using a Tc-99 target, achieves a transmutation rate of approximately 10.9 kg/year of Tc-99 in the LLFP pins through secondary neutrons produced by the spallation process. In contrast, using a lead target achieves a higher transmutation rate of around 20.0 kg/year of Tc-99 in the LLFP pins. This work was supported by the DOE ARPA-E Project under the award number DE-AR0001578.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157147</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light Water Reactor Loading Pattern Optimization with Reinforcement Learning Algorithms</title>
<link>https://hdl.handle.net/1721.1/157146</link>
<description>Light Water Reactor Loading Pattern Optimization with Reinforcement Learning Algorithms
Seurin, Paul R.M.
In 2023, Commercial Nuclear Power Plants (NPPs) in the USA, comprising Light Water Reactors (LWRs) such as Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs), remained the largest single source of carbon-free energy. They provided approximately half of the nation’s carbon-free electricity and under 20% of total electricity throughout the year. Ensuring the competitiveness of these nuclear assets is crucial for maintaining their role in providing dispatchable clean energy alongside renewable sources. The recent commissioning of Vogtle Units 3 and 4 marked the first new NPPs connected to the grid in over three decades, highlighting the high costs associated with nuclear technology and underscoring the need to improve their economic competitiveness. Optimizing the fuel cycle economics through enhanced core Loading Pattern (LP) is a key strategy to address this challenge. Since the 1960s, optimizing the LP for LWRs has been a major focus in nuclear engineering, but the large search space has posed significant difficulties. Computational methods from Stochastic Optimization (SO) have been used to tackle this issue, yet they often fail to outperform expert-designed solutions preferred by utilities. Deep Reinforcement Learning (RL), a subset of Deep Learning focused on decision-making, has shown promise in surpassing human-expert solutions in fields such as gaming and robotics. This thesis investigates the use of RL to improve automated tools for solving the PWR LP optimization problem, with the goal of developing efficient decision-support tools for core designers to generate more economical loading patterns. We present a novel approach using deep RL to solve the LP problem and compare it with traditional SO-based methods. Our findings indicate that the LP problem benefits from a global search to rapidly identify promising directions, followed by a local search to efficiently exploit these directions and avoid local optima. 
Proximal Policy Optimization (PPO), a type of RL algorithm, adapts its search capabilities with learnable policy weights, making it effective for both global and local searches, which contributes to its superiority over SO-based methods. Additionally, we introduce a new method called PEARL (Pareto Envelope Augmented with Reinforcement Learning) to tackle multi-objective optimization challenges. PEARL demonstrates greater efficiency in identifying Pareto fronts without requiring additional designer intervention, compared to traditional single-objective scaling methods. Finally, we extend PEARL to a novel paradigm called physics-informed RL by integrating statistical techniques and physics knowledge to enhance algorithm performance. As problem complexity increases, classical methods sometimes fail to find feasible solutions. Incorporating physics-informed insights becomes crucial for discovering high-quality and diverse solutions more efficiently. These results highlight the potential of AI advancements in the nuclear field. A deep understanding of AI tools is essential to fully leverage their capabilities. Our approach achieved a cumulative benefit of over $4 million per year per plant compared to using off-the-shelf AI solutions. While further work is needed to translate these theoretical benefits into real reactors, these algorithms promise to enhance the competitiveness of future nuclear fleets. In doing so, they could make a substantial contribution to achieving carbon neutrality by increasing the amount of clean electricity on the grid.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157146</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed Singular Value Decomposition Through&#13;
Least Squares</title>
<link>https://hdl.handle.net/1721.1/157145</link>
<description>Distributed Singular Value Decomposition Through&#13;
Least Squares
Zhao, Freddie
Singular value decomposition (SVD) is an essential matrix factorization technique that decomposes a matrix into singular values and corresponding singular vectors that form orthonormal bases. SVD has wide-ranging applications from principal component analysis (PCA) to matrix completion and approximation. Methods for computing the SVD of a matrix are extensive and involve optimization algorithms with some theoretical guarantees, though many of these techniques are not scalable in nature. We show the efficacy of a distributed stochastic gradient descent algorithm by implementing parallelized alternating least squares and proving theoretical guarantees for its convergence, supported by empirical results, which allow for the development of a simple framework for computing the SVD in a correct, scalable, and easily optimizable manner.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157145</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomy Work: Personhood, Expertise, and Activism of Disabled AI Data Workers in China</title>
<link>https://hdl.handle.net/1721.1/157144</link>
<description>Autonomy Work: Personhood, Expertise, and Activism of Disabled AI Data Workers in China
Wu, Di
This dissertation examines the labor and life of disabled workers in China’s artificial intelligence (AI) data annotation programs. The study draws on 14 months of ethnographic fieldwork, conducted over three years, with disabled activists, disabled workers, employment advocates, tech company staff, and government officials. This is supplemented by five years of my professional experience in disability nonprofits. My primary field site was a disabled people-led NGO founded in 2006, which I refer to as ENABLE. In recent years, ENABLE has developed numerous projects with tech companies to hire people with visual and physical impairments as data annotators for AI systems and to design assistive technologies for the community.&#13;
&#13;
In ENABLE’s case, what appears to be a familiar story of capitalist exploitation of disabled people turns out to be, instead, a story about the struggles of disabled Chinese people over different ways of being, living, and relating. I use the term “autonomy work” to describe disabled people’s labor to make “autonomous” machines (zidonghua) (Chapter 1), build an “autonomous” life (zizhu shenghuo) through work (Chapters 2 &amp; 3), and design tools for “independent” navigation (duli chuxing) (Chapter 4).&#13;
&#13;
I argue that disabled activists seek to construct greater autonomy for their community by reconfiguring social relations in and around technology. I call this mechanism “rerouting.” Instead of a complete departure from asymmetrical power relations, my interlocutors “reroute” the pathways between different human and non-human nodes without changing the nodes per se. They do so within the sociotechnical systems they build, the technological institutions they navigate, the kinship structures they seek to remake through tech work, and the physical terrain they navigate with assistive devices, all in pursuit of multiple forms of autonomy. “Rerouting” contributes to the rich scholarship on the intersection of disability and technoscience by highlighting the effects of disabled people’s unorthodox knowledge and practices that bend the world towards disabled bodies and minds. Furthermore, it specifies a key mechanism through which these effects are realized. Disabled people hack lives, build access, and improvise affordances by reorganizing the pathways between objects, bodies, and environments that were originally designed with other intentions.&#13;
&#13;
With deep knowledge and lived experience of the social issues they advocate for, disabled activists in China approach technology as a puzzle piece, not a magic bullet. They make technology useful for their lives, work, and activism by returning the technical to the social. Rather than displacing the slow work of social movements with neoliberal techno-solutionism, I show that this community-driven technological engagement is part of a larger effort to sustain that very slow work within a shifting political environment.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157144</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Survey Techniques to Examine Morphological Evolution of Coastal Regions</title>
<link>https://hdl.handle.net/1721.1/157143</link>
<description>Survey Techniques to Examine Morphological Evolution of Coastal Regions
Ammons, Seth N.
Beaches are dynamic, changing with tides, winds, and waves. Here, a beach was mapped daily for 3 weeks from the dune to the low-tide water line on the Outer Banks of North Carolina at the US Army Corps of Engineers Field Research Facility in Duck. The 22,500 m2 area of interest was surveyed daily by a walker carrying a GPS-equipped backpack and occasionally with a lidar-equipped drone. Surveys of the northern region of interest also were collected with a stationary terrestrial lidar mounted on the dune. The observed morphological events include the destruction and formation of a cusp field during which there was 1.4 m of erosion and accretion associated with bays and horns, and the formation over 7 days of a ~1-m high ridge and runnel system. The GPS-equipped backpack apparatus was used as ground truth for estimates made with the lidar systems. Along both cross- and alongshore transects the lidar elevations were within approximately 0.05 m of those estimated by the backpack surveys, with RMS errors less than 0.11 m.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157143</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge and the City: Redefining Islamic Urbanism, 762–1067</title>
<link>https://hdl.handle.net/1721.1/157142</link>
<description>Knowledge and the City: Redefining Islamic Urbanism, 762–1067
Lesoon, Courtney
This study demonstrates that the rapid urbanization of the Islamic world in its first five centuries can be attributed in part to the development of an independent class of city administrators who ensured that urban life thrived even in the most tumultuous of political times. This dissertation subverts existing historical models of urbanism, which were developed for medieval Europe, by excavating a theorization of the city from the political writings of the philosopher al-Farabi (d. 950), who argues that cities require the administrative wisdom of learned men trained in law. To historically corroborate al-Farabi’s theory, which has been cast as utopian, I identify these learned men in the historical record as the ʿulamaʾ. I demonstrate that early Islamic learning was a complex but ordered system—even before its institutionalization—first by articulating its delineations via a praxis of personally conferring and acquiring ʿilm (knowledge). This praxis was, I demonstrate, informed by a widely held view that ʿilm was metaphysically substantiated. The ʿulamaʾ—those marked by ʿilm—inherited their legal authority from the Prophet via the transmission of hadith and thus did not rely entirely on the political vesting of the caliph or amir to carry out Islamic law on the level of the city. I demonstrate that the ʿulamaʾ, with their independent legal authority, served as city administrators via two primary positions—the qadi (judge) and the muḥtasib (officer of public order)—and various other positions delegated by these two offices. Just as the system of early Islamic learning was regularized across the Islamic world, so too was the administration of cities by the ʿulamaʾ. Through city administration, the ʿulamaʾ cultivated favorable living conditions in cities.
Their relative independence from the state allowed for a continuity in city administration—and thus a continuity in urbanism—that survived the many political upheavals that came to define the Islamic world in the tenth and eleventh centuries.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157142</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Architectures of Microbiality: From Diatoms to Diatom Houses</title>
<link>https://hdl.handle.net/1721.1/157141</link>
<description>Architectures of Microbiality: From Diatoms to Diatom Houses
Luo, Xuan
This dissertation examines the diatom—a microorganism found ubiquitously from oceans to kitchen countertops—as a resonant factor in understanding modernity. From the early 19th century to the mid-20th century, rapid advances in optical microscopy dramatically unveiled the microbial world, during which diatoms, owing to their astonishing biological properties, forms, and material possibilities, became the subject of wonderment by a diverse group of naturalists, artists, and architects. I contend that microbial ubiquity constitutes a crucial but often overlooked aspect of environmental history, which architecture has consistently failed to account for. &#13;
	&#13;
The narrative progresses from early fascination with the diatom’s aesthetic potential to geologists’ more serious-minded efforts, laying the foundations for Richard Neutra’s (1892–1970) insistence that the diatom was critical to a new type of modern architecture. Through the case studies of Marquis Panciatichi d’Aragona (1813–1897) and Jacob Whitman Bailey (1811–1857), among others, I explore how diatom-influenced works from the 19th to early 20th centuries underpinned a transboundary dialogue on the microbial and its architectural imaginations. This perusal historicizes the theoretical and technological conditions that made possible the emergence of modernist views of nature, epitomized by Neutra’s philosophy on environmental psychology and realism through his early 20th-century proposal to integrate diatoms into the very fabric of modern living spaces. &#13;
	&#13;
This dissertation examines diatoms within shifting epistemological frameworks, tracking their transition from scientific specimens to motifs in visual culture. It investigates how unseen natural elements were disseminated, accumulated, and manifested into distinctly perceivable forms, revealing that the understanding of diatoms expanded from isolated, object-focused studies to a concern for environmental relationships and geological transitions. Filling the world was not some grand narrative of human will but the disorienting, puzzling, and even frightful everywhere-ness of microbiality. The intersection of diatoms with architecture changed from aesthetics and form to a deeper engagement with sites and land, following the transformative reconception of the thickness of the earth’s surface. This dissertation reveals a condition of knowledge that architecturally and psychologically rewrote nature as an encounter of biological (un)consciousness and technological actualization.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157141</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systems Engineering for Carbon Capture and Storage</title>
<link>https://hdl.handle.net/1721.1/157140</link>
<description>Systems Engineering for Carbon Capture and Storage
Zhang, Tiantian
Carbon Capture and Storage (CCS) is a crucial technology in the mission to achieve net-zero carbon emissions by midcentury. By capturing and storing CO2 from large industrial sources and power plants, CCS mitigates the impact of existing industrial activities while maintaining energy security and economic stability. The study underscores the necessity of a systematic approach to CCS system design and development to meet stakeholder requirements. It highlights the versatility of CCS in addressing emissions across various sectors, its ability to be retrofitted to existing infrastructure, and its potential for immediate emissions reduction compared to the longer timelines required for integrating renewable energy sources.&#13;
This study analyzes CCS systems holistically, identifying primary components and alternative options for capture, transport, storage, and utilization. It reveals that the transport type significantly impacts system utility, with pipelines being the most effective. The analysis also indicates that CCS systems capturing CO2 from power plants, ammonia, and chemical production facilities and utilizing onshore pipelines and saline aquifers offer high utility and low cost. The Gulf Coast and Permian &amp; Midcontinent regions show better performance due to existing infrastructure and storage capacity. The study emphasizes the benefits of staged CCS development for broader deployment, technology maturation, and cost recovery. Sensitivity analyses suggest that future technology advances could further improve CCS system performance and economic viability.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157140</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Redefining Urban Landscapes: A Methodological Approach to Transforming Underused Parking Spaces with Dynamic Urban Functions</title>
<link>https://hdl.handle.net/1721.1/157139</link>
<description>Redefining Urban Landscapes: A Methodological Approach to Transforming Underused Parking Spaces with Dynamic Urban Functions
Fan, Jie
This study presents an approach to identifying underutilized urban spaces, focusing on parking areas, and explores potential reutilization strategies in Greater Boston. In the milieu of the information age, global urbanization, and technological development, the abundance of urban data offers a new way to approach urban proposals. The city, as a multifaceted artifact, is examined through the lens of advanced data-driven techniques, particularly deep learning. Using a computer vision model, underused surface parking lots are automatically detected from historical satellite imagery, highlighting a misalignment between current infrastructure and actual urban needs. The study then draws on a range of urban factors to analyze parking patterns. In connection with the multimodal transit system, this redundant surface parking holds untapped potential. Given high rents and the housing situation, these spaces could be transformed into housing units or even mixed-use districts to alleviate the housing crisis in Greater Boston.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157139</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Capture to Storage: Understanding the Viability&#13;
and Challenges of Carbon Capture and Sequestration&#13;
Initiatives</title>
<link>https://hdl.handle.net/1721.1/157138</link>
<description>From Capture to Storage: Understanding the Viability&#13;
and Challenges of Carbon Capture and Sequestration&#13;
Initiatives
James, Lauren
This thesis explores the implementation of Carbon Capture and Sequestration (CCS) technologies, focusing on the stages of capture, transportation, and sequestration. Utilizing a system dynamics model, the research evaluates CCS's effectiveness and economic viability across various scenarios, including those outlined by the International Energy Agency (IEA). The baseline model suggests that even under favorable assumptions, CCS permanently sequesters only a small fraction of total global emissions.&#13;
&#13;
The economic analysis reveals a slight decrease in total costs, attributed to the learning curve, but offset by increasing costs as more complex projects are undertaken. The model also highlights the energy penalty associated with high energy requirements for capture. Additionally, the alignment of capacities across capture, transportation, and sequestration phases is important because discrepancies can lead to inefficiencies and bottlenecks.&#13;
&#13;
This research acknowledges limitations, including the use of aggregated data and assumptions across many parameters. These limitations emphasize the need for further research to refine these estimates and enhance the model's accuracy. Despite these challenges, the model serves as a beneficial tool for testing policy interventions and assessing the potential of CCS as a component of global climate strategy.&#13;
&#13;
Overall, the findings highlight the complexities and challenges of deploying CCS technologies at scale, emphasizing the importance of coordinated policy, technological innovation, and infrastructure development. This research provides a foundation for future studies and policy discussions to better understand CCS's role in achieving climate goals.&#13;
&#13;
Disclosure: The following content is the author’s, and responsibility is taken for all content. Noting this, it was generated by the author with the assistance of an AI-based system to&#13;
augment the effort.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157138</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics and Implications of ROS in Marine Systems</title>
<link>https://hdl.handle.net/1721.1/157137</link>
<description>Dynamics and Implications of ROS in Marine Systems
Taenzer, Lina
The reactive oxygen species (ROS), superoxide and hydrogen peroxide, play critical roles across diverse marine ecosystems, influencing redox chemistry and organismal health. The distribution and concentration of these compounds in the oceans may serve as important controls for various biogeochemical cycles. The contrasting physiological nature of ROS, serving as both integral compounds for cellular processes such as signaling and growth while inducing oxidative cell damage at elevated concentrations, has made interpretation of their roles in organismal and ecosystem health challenging. Despite the potential for these ROS to provide unique insights into the intricate interactions occurring at the interface between life and its surrounding environment, critical gaps in our understanding of these compounds in marine systems exist. In this thesis I explored two aspects of marine ROS. The first part is focused on advancing our understanding of the distribution of superoxide in the sea. As part of a multidisciplinary team, I developed a submersible chemiluminescent sensor (SOLARIS) capable of measuring ROS in situ to ocean depths greater than 4,000 meters. With the use of SOLARIS, I discovered that a broad diversity of sponges and corals are local hotspots of superoxide at depth. Then, I studied the distribution of superoxide in the stratified water column of the Baltic Sea and found large subsurface maxima in the aphotic zone. In the second part of this thesis, I probed the use of hydrogen peroxide as a monitoring agent of organismal health. I measured hydrogen peroxide and bromoform production by two seaweed species exposed to different stressors. An analysis of these signals suggests that hydrogen peroxide could serve as a non-invasive chemical signature for stress in seaweed meadows and farms. Lastly, I characterized hydrogen peroxide associated with different coral species during a Stony Coral Tissue Loss Disease transmission experiment. 
I determined that hydrogen peroxide does not predict infection before lesions are visible, thus hindering its utility as an early-stage signature of disease within corals. Altogether, this thesis extends our perspective on the distribution and controls on ROS in various marine systems and provides a baseline for using ROS dynamics to monitor organismal health.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157137</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Technoeconomic Analysis of Geothermal District Heating&#13;
in the Boston, MA area.</title>
<link>https://hdl.handle.net/1721.1/157136</link>
<description>Technoeconomic Analysis of Geothermal District Heating&#13;
in the Boston, MA area.
Estep, Joseph
This study conducts a comprehensive technoeconomic analysis of geothermal district heating (GDH) in the Boston, MA area, with a specific focus on the MIT campus. The research begins by reviewing the evolution of district energy systems, highlighting various use cases, technologies, and policy developments. It then defines the system problem and establishes a framework for implementing a geothermal district heating system at MIT. The analysis examines the economic viability and decarbonization potential of the GDH system, identifying various system architectures and phased campus sector implementation scenarios. These scenarios are compared to a 'business as usual' reference case. The study reveals that the recommended implementation scenario, MG-E-N-W, not only offers the lowest cost but also achieves the lowest emissions. Over a 30-year period, this scenario presents a net present value (NPV) savings of more than $700 million and 2 million MTCO2e compared to the reference case, making it the most economically and environmentally favorable option for MIT's campus energy system transformation.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157136</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the Cellular Landscape of the Brain: A Scalable Approach to Comprehensive Microscopy Data Analysis</title>
<link>https://hdl.handle.net/1721.1/157135</link>
<description>Mapping the Cellular Landscape of the Brain: A Scalable Approach to Comprehensive Microscopy Data Analysis
Kim, Minyoung E.
Recent advances in intact tissue processing and imaging have enabled the generation of whole brain microscopy data at subcellular resolution, revealing intricate morphological details of cells at unprecedented scales. Given that cellular morphology is strongly linked to distinct functional states of cells, in-depth morphological analysis of such data offers immense potential for understanding their roles in brain development and disease. However, the lack of scalable computational techniques poses a substantial challenge in achieving comprehensive morphological characterization. To efficiently and accurately analyze cellular morphology, we need to process terabyte-scale three-dimensional (3D) data, which inevitably complicates downstream analysis workflows with existing methods.&#13;
&#13;
To address the challenge, we developed an end-to-end scalable framework that seamlessly strings each step of the analysis pipeline together, enabling comprehensive fluorescence microscopy data analysis. The framework, termed MorPheT (Morphology Phenotyping Tool), serves as an all-in-one solution, offering a suite of analysis modules spanning from image pre-processing to precise cell detection, atlas alignment, morphological phenotyping, and interactive visualizations. MorPheT employs an ensemble method using both supervised and unsupervised approaches to maximize feature learning for unbiased morphological characterization. A novel deep neural network (ALNet) was designed to capture the long-range contextual dependencies inherent in 3D training data during supervised learning. Unsupervised learning leverages complementary features from the supervised approach, demonstrating the powerful synergy of this ensemble method.&#13;
&#13;
We applied MorPheT to two main projects. First, we profiled brain-resident macrophages (BRMs) and created the first fetal mouse brain atlases across multiple developmental stages, revealing distinct regional growth patterns of BRMs throughout development. We also demonstrated MorPheT’s effectiveness in characterizing microglia distribution patterns and morphological properties brain-wide in both control and neurodegeneration mouse brains. In the second project, we investigated cFos+ cells in a memory engram study, showcasing MorPheT’s utility for brain-wide analysis of engram cells. By examining regions hypothesized to hold memory engrams for contextual fear conditioning memory, we identified brain regions where engrams for a specific memory are distributed. Taken together, MorPheT is a powerful tool for cell profiling and mapping across the brain, and we anticipate it will help democratize computational analysis for large-scale microscopy datasets, making advanced analytical approaches more accessible to the broader scientific community.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157135</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compact Capabilities: Developing and Evaluating a Field-Portable Neutron Resonance Capture Analysis System</title>
<link>https://hdl.handle.net/1721.1/157134</link>
<description>Compact Capabilities: Developing and Evaluating a Field-Portable Neutron Resonance Capture Analysis System
Rahon, Jill M.
Technological advances in the thorium fuel cycle and other advanced reactor concepts suggest their possible commercialization for nuclear power use in the next ten years. Although the thorium cycle shares many aspects with the uranium and plutonium fuel cycles, it introduces the requirement for the nondestructive assay of multiple isotopes (²³⁸U, ²³²Th, ²³³U, ²³⁵U, or ²³⁹Pu) in varied concentrations and chemical or physical forms. Current methodologies used for safeguarding the uranium and plutonium fuel cycles are either unsuitable for quantifying many of these isotopes or lack the ability to differentiate between them effectively. This work presents an experimental evaluation of a portable Neutron Resonance Capture Analysis (NRCA) system sensitive to isotopes with neutron capture resonances in the epithermal range (1-100 eV). NRCA is a technique traditionally used for nuclear data collection and nondestructive assay of archaeological materials, typically conducted at large accelerator facilities with beamlines in excess of ten meters. This research miniaturizes the system to a two-meter beamline using a portable deuterium-tritium neutron generator. It builds upon the foundation of a portable Neutron Resonance Transmission Analysis (NRTA) system, utilizing capture gamma rays to generate a signal, in contrast to the neutron transmission measurements of NRTA. The NRCA technique is evaluated in this novel, portable configuration first using nonradioactive samples for optimization and then progressing to depleted uranium and thorium salt samples. Through a research partnership with Pacific Northwest National Laboratory, the technique was tested using highly enriched uranium, ²³³U, and high-assay, low-enriched uranium (HALEU) samples. Field portability tests demonstrated its ability to operate safely in field conditions, with operator doses remaining well within occupational limits.
The system was able to identify multiple mid- and high-Z materials by reconstructing their neutron resonance profiles in experiments as brief as 20 minutes. It successfully differentiated between nuclear fuel cycle isotopes in composite samples as small as 2 grams, with limited success in quantifying the areal densities of uranium and thorium. These results suggest that NRCA, especially when used in concert with NRTA and other neutron-interrogation techniques, has the potential to rapidly and nondestructively quantify and characterize isotopes of interest in support of safeguards material accountancy.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157134</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Location, Location, Substation? How Battery Energy Storage Systems (BESS) Can Create Value in Unexpected Places</title>
<link>https://hdl.handle.net/1721.1/157127</link>
<description>Location, Location, Substation? How Battery Energy Storage Systems (BESS) Can Create Value in Unexpected Places
Schutt, Neal
The transition to renewable energy is a critical step in reducing global carbon emissions, yet it introduces new challenges for the aging electrical grid, particularly in urban areas. Battery Energy Storage Systems (BESS) are emerging as key infrastructure in this transition, capable of enhancing grid resiliency, managing peak loads, and facilitating the integration of renewable energy sources. Federal and state incentives and a recent sharp decline in the cost of battery cells have made BESS development economically viable. This thesis explores the potential of BESS to create public and economic value in underutilized urban spaces through the exploration of a hypothetical redevelopment proposal for the Alewife MBTA Complex in Cambridge, Massachusetts.&#13;
&#13;
The Alewife MBTA Complex presents significant challenges for redevelopment due to the high cost of demolishing the decaying existing structure. However, its proximity to a major substation and the increasing local demand for electricity make it an ideal candidate for a BESS project. This thesis demonstrates how integrating energy storage into the redevelopment of the site can enable an otherwise financially infeasible project.&#13;
&#13;
The paper provides an overview of the BESS development process, detailing each phase from creating a business strategy to disposition. It offers insights into the common challenges encountered, and how these might be navigated to optimize project outcomes. By breaking down the development timeline and key decision points, this thesis serves as a practical guide for real estate professionals to gain familiarity with Battery Energy Storage Systems. &#13;
&#13;
Through detailed financial modeling and analysis, including sensitivity testing, this research quantifies the expected financial performance of a BESS project at the Alewife site. The study concludes that BESS can unlock ‘found value’ in sites with little other economic potential. The findings suggest that incorporating BESS into real estate development projects can provide substantial public benefits, including enhanced grid resilience, lower energy costs, and increased property values, making it a strategic tool for urban planners and developers.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157127</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Sparse and Low Rank Matrix Optimization for Machine Learning Applications</title>
<link>https://hdl.handle.net/1721.1/157126</link>
<description>Advances in Sparse and Low Rank Matrix Optimization for Machine Learning Applications
Johnson, Nicholas André G.
Numerous fundamental problems in operations research, machine learning, and statistics exhibit natural formulations as cardinality or rank constrained optimization problems. Sparse solutions are desirable for their interpretability and storage benefits. Moreover, in the machine learning setting, sparse solutions exhibit superior model generalization and have a natural interpretation as conducting feature extraction in high-dimensional datasets. On the other hand, since the rank of a matrix is equivalent to the cardinality of the matrix's vector of singular values, rank can be interpreted as the matrix generalization of sparsity. Accordingly, low rank solutions inherit similar desirable properties to sparse solutions while allowing for very flexible modelling capability. Unfortunately, optimizing over cardinality and rank constraints is non-convex and NP-hard in general, which has led to a strong reliance on convex relaxations and heuristic methods that yield sub-optimal solutions.&#13;
&#13;
This thesis advances both the theory and application of sparse and low rank matrix optimization, focusing on problems that arise in statistics and machine learning. We develop algorithmic approaches to problems exhibiting cardinality and rank constraints by leveraging techniques from mixed-integer and mixed-projection optimization. The proposed algorithms outperform existing convex relaxations and heuristics. Our rigorous analysis and empirical validation aim to contribute to both the theoretical foundations of optimization and the development of practical tools for complex challenges in statistics and machine learning.&#13;
&#13;
Chapter 2 studies the Sparse Plus Low Rank Matrix Decomposition problem. We present an alternating minimization algorithm that computes high-quality feasible solutions and outperforms benchmark methods, scaling to dimension n=10000 in minutes. We additionally design a custom branch and bound algorithm to globally solve problem instances of dimension up to n=25 in minutes. Chapter 3 examines the Compressed Sensing problem, for which we present a custom branch and bound algorithm that can compute globally optimal solutions. Our approach produces solutions that are on average 6.22% more sparse on synthetic data and 9.95% more sparse on real-world ECG data when compared to state-of-the-art benchmark approaches. Moreover, our approach outperforms benchmark methods when used as part of a multi-label learning algorithm. Chapter 4 explores the problem of learning a partially observed matrix that is predictive of fully observed side information, which constitutes an important generalization of the Matrix Completion problem. We reformulate this problem as a mixed-projection optimization problem and present an alternating direction method of multipliers algorithm that can solve problems with n = 10000 rows and m = 10000 columns in less than a minute. On large-scale real-world data, our algorithm produces solutions that achieve 67% lower out-of-sample error than benchmark methods in 97% less execution time.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157126</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Risk and Optimizing Resilience of Digital and Physical Supply Chains</title>
<link>https://hdl.handle.net/1721.1/157125</link>
<description>Predicting Risk and Optimizing Resilience of Digital and Physical Supply Chains
Hu, Kevin
A number of disruptions and related challenges have affected the landscape of global supply chains in the past decade. These include the COVID-19 pandemic, geopolitical tensions, and cross-industry cyber breaches, highlighting the need for resilient and adaptive supply chain management. This thesis explores the role of data, machine learning, and analytics in developing predictive risk models to evaluate supply chain-related risks and optimizing the supply chain to improve resiliency. This thesis focuses on the two primary industry application domains of cybersecurity and the global shipping industry.&#13;
&#13;
Chapters 2 and 3 are motivated by the increasing prevalence of supply-chain related cyber breach incidents such as the SolarWinds breach in 2020. Chapter 2 develops the first predictive model for cyber risk that relies on innovative supply chain features. It utilizes large-scale data from more than 30,000 entity enterprises and their respective digital supply chain networks. In particular, this chapter develops descriptive features of the local supply chains of these entities, and then leverages these features to develop a supervised ML model for predicting whether an enterprise will experience a data breach incident. The results from this analysis demonstrate that local supply chain characteristics are significant predictors of data breach risk. Additionally, including supply chain features increases predictive power compared to baseline models that rely solely on internal enterprise features.&#13;
&#13;
Chapter 3 introduces an innovative global supply chain network graph and cyberattacker framework for modeling cyberattacker behavior in supply chain network environments. Theoretical analysis of this model proves that certain local supply chain characteristics determine an upper bound on the probability that an enterprise is compromised in this framework. Furthermore, the supply chain graph is calibrated with real data and then used to train an unsupervised reinforcement learning (RL) attacker agent. The agent traverses the supply chain network graph by cyberattacking and compromising nodes with the goal of maximizing its reward. The trained agent is used to produce an unsupervised risk assessment of the company nodes by simulating attacks within the network graph. The assessment, which is validated using public breach data, is competitive with basic unsupervised models and can significantly improve predictive performance when included as a feature for supervised models. An attractive aspect of this innovative modeling approach is that it does not require access to historical breach data needed for supervised models and algorithms, as unfortunately, the currently available data on cyber breaches is very partial and sparse.&#13;
&#13;
Chapter 4 develops a novel methodology for optimizing shipping container scheduling for the last leg in the shipping container global supply chain, called the ‘drayage trucking’ delivery process. The work in this chapter details the drayage trucking process from end to end and highlights key sources of inefficiencies throughout the process. An integer programming (IP) model is introduced to schedule each step in the drayage trucking delivery process to improve efficiency and minimize additional costs that are incurred as a result of inefficiencies in the container delivery schedule, which are known as ‘accessorial charges’. The IP generates optimized schedules using industry delivery data, which are then compared with historical schedules. The results demonstrate that this approach can significantly decrease costs and improve container scheduling efficiency compared to current industry practices.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157125</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying the Severity of a Cybersecurity Incident for Incident Reporting</title>
<link>https://hdl.handle.net/1721.1/157124</link>
<description>Quantifying the Severity of a Cybersecurity Incident for Incident Reporting
Conard, Chelsea Foushee
In the field of cybersecurity, the lack of standardized data collection and incident reporting methods poses significant challenges to addressing and responding to incidents affecting critical infrastructure. Various initiatives aim to resolve this issue by mandating the collection of data on cyber incidents; however, there is often a lack of clear guidelines on how the collected data will be utilized effectively.&#13;
&#13;
This paper introduces the Cyber Incident Severity Scale (CISS), a framework designed to guide the selection of relevant data for analysis and to communicate the severity of a cybersecurity incident. By drawing insights from established scales in other fields, such as natural disasters and public health, this research produces a single score for a reporting entity, which can be aggregated to determine the overall severity of an incident. The ability to swiftly assess and score an incident is a critical tool for quantifying incident severity, prioritizing response, supporting policy development, and bolstering the overall security of critical infrastructure.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157124</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond the Ovaries: Renaming a common yet neglected hormonal condition could be the key to unlocking better care for patients</title>
<link>https://hdl.handle.net/1721.1/157123</link>
<description>Beyond the Ovaries: Renaming a common yet neglected hormonal condition could be the key to unlocking better care for patients
Stewart, Lily
PCOS is a common hormonal condition found in 10 to 19 percent of people with ovaries. It frequently causes irregular periods and ovulation and is one of the most common causes of female infertility. However, the effects do not stop there. People with PCOS are at higher risk for a slew of health complications: insulin resistance, sleep apnea, depression, and anxiety. They are also more likely to develop metabolic syndrome—a combination of high cholesterol, high blood pressure, diabetes, and high waist-to-hip ratios. Together, many of these symptoms are risk factors for fatty liver disease or heart attacks and strokes. &#13;
&#13;
Despite the commonness and potential seriousness of the condition, many patients go undiagnosed, and those with diagnoses frequently go under-treated. The reasons for this are many. PCOS’s cause is unknown. It has no known cure. It looks different from patient to patient. Its research is underfunded. Physicians do not learn much about it in medical school. &#13;
&#13;
But one reason at the root of it all, some experts say, is how tightly this condition has been intertwined with reproduction and fertility. Over the past decade, researchers and physicians who specialize in the condition have been pushing for everyone to recognize PCOS for what it is: a full-body endocrine syndrome with wide-reaching effects on health and quality of life. And one way to drive this shift is to change something fundamental about the condition: its name.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157123</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deformation and its surface expression in stressed planetary materials</title>
<link>https://hdl.handle.net/1721.1/157122</link>
<description>Deformation and its surface expression in stressed planetary materials
Seltzer, Cassandra
This thesis investigates the response of planetary materials to changing stress fields, and resultant signatures of stress in geophysical properties observable from planetary surfaces. When forces change within rocky and icy layers of planetary bodies, constituent materials of these layers adjust on the microscale; energetically favorable alignment of microstructural materials builds across scales to result in deformation, preferred directions for material transport and wave propagation, and heat release. This work therefore explores the relationship between microstructure and stress conditions in order to connect geophysical observations to the underlying forces on subsurface materials, using both experimental and computational methods. The first two chapters investigate two-phase deformation, where a partial melt phase is present between grains of solid materials such as olivine (Chapter 2) or ice (Chapter 3). Chapter 2 finds that in partially molten rocky materials, microstructural melt aligns parallel to the maximum applied stress direction quickly over geological time, while crystallographic orientations require significant strain intervals to reset. This shows that we can use the melt-induced changes to properties in the deforming Earth, for example, as an indicator of short-term stress fields. Chapter 3 applies these findings to the evolution of icy systems through simulated deformation of ice-melt aggregates, suggesting that current seismic studies which do not correct for the orientation of melt may misinterpret deformation at the base of warm ice sheets. The final two chapters center on deformation mechanisms that may shape the properties of icy outer Solar System satellites as they orbit their host planets. Chapter 4 provides novel experimental constraints on meteoritic materials relevant to the cores of icy moons, finding that microstructural brittle deformation, and resultant energy release, occurs even at very small differential stresses. 
Acoustic emissions associated with this brittle deformation are also more energetic at lower confining pressures, indicating that smaller, lower-pressure icy moons might receive enhanced heat from core deformation. The final chapter (Chapter 5) investigates crustal processes on Titan, Saturn’s largest moon. This work models how tidal stresses interact with local topographic stresses to create fracture across Titan’s crust, creating pathways for sediment generation and fluid transport.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157122</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Studies in biotic persistence and the taxonomic stability of traits over geological time</title>
<link>https://hdl.handle.net/1721.1/157121</link>
<description>Studies in biotic persistence and the taxonomic stability of traits over geological time
Tamre, Erik
It is increasingly recognized in evolutionary biology that biotic processes and pathways, as well as organisms or populations, can be viewed as being under selection. This view is particularly relevant when considering the history of the Earth’s biosphere over geological timescales, where the evolution of groups interacts with the evolution of processes in shaping the biosphere over time. This thesis considers a novel selection mechanism proposed to be operating on clades based on their age and tests its presence in marine animals over the Phanerozoic (Chapter 2); it also seeks to understand the interaction between some microbial traits and lineages over geological time and to consider the implications of this interaction for the traits’ longevity. Chapter 3 considers the production of the photoprotective pigment scytonemin, and Chapter 4 considers microbial iron oxidation. In these two chapters, I introduce a metric, the “clade fidelity” of a trait, to describe the trait’s tendency to be associated with certain lineages and vertically inherited within them throughout its history, and I examine the relationship between a trait’s clade fidelity and its ecological context as well as its evolutionary fate. The case studies in the thesis show that the proposed theoretical frameworks are applicable in practice and carry considerable explanatory power for the understanding of evolutionary processes on the scale of planetary history.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157121</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Orbital stability in a classical pilot-wave system</title>
<link>https://hdl.handle.net/1721.1/157120</link>
<description>Orbital stability in a classical pilot-wave system
Liu, Nicholas Z.
The hydrodynamic bouncing droplet system, consisting of millimetric droplets bouncing on a vibrating fluid bath, displays many quantum mechanical phenomena on a macroscopic scale. These phenomena include tunnelling, diffraction and wave-like statistics. This thesis focuses on the features responsible for the quantisation of orbital radii, and rationalises this quantisation in terms of the stability of circular orbits arising in the presence of a rotating frame and a central force. We find that orbital quantisation is most pronounced when the waves generated by each bounce decay slowly. The wave decay rate, in turn, is related to the concept of path memory, the number of prior impacts with the bath that affect the droplet’s future dynamics. We conduct an analytical investigation into the stability of circular orbits using a generalised theoretical framework that allows for an exploration of classical pilot-wave dynamics both inside and outside the experimentally accessible parameter regime. The exploration of parameter regimes beyond those accessible with the hydrodynamic system reveals much richer orbital dynamics. Our novel mathematical approach allows for evaluation of the integrals appearing in the stability problem in terms of Bessel functions of complex order, and thus facilitates asymptotic expansions of the stability problem in various limits. Within the experimental parameter regime, we demonstrate that in a rotating frame, circular orbits destabilise only via resonant instabilities, for which the growing perturbations oscillate at a frequency that is an integer multiple of the orbital frequency. Conversely, in a central force, non-resonant instabilities arise, for reasons detailed herein. Outside the experimental parameter regime, we show how the non-resonant instability leads to counter-intuitive scenarios; for example, circular orbits that are stabilised by increasing memory. 
In the limit of vanishing particle inertia, infinite path memory and a linear spring force, we demonstrate the intriguing possibility of infinitely many sharply quantised orbital states, where the allowed orbital radii exist in vanishingly thin intervals, and are stabilised by the combined influence of the time-averaged wave field and spring force. We demonstrate that these sharply quantised orbital states are only stable for higher memory. We then consider the effect of weak external forces on spin states, circular orbits arising in the absence of external forces, and show that the destabilisation of spin states depends in a complex manner on the type of external force applied. Finally, we show that the instability of large circular orbits is related to the in-line speed oscillations of free walking droplets in a manner that is independent of the external force.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157120</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Trouble on the Range: When Does a National Park Become a Bison Zoo?</title>
<link>https://hdl.handle.net/1721.1/157119</link>
<description>Trouble on the Range: When Does a National Park Become a Bison Zoo?
Hartley, Sophia
Yellowstone National Park is often credited with bringing American bison back from the brink of extinction. In 1902, there were merely 25 individual bison in the park, but now, Yellowstone’s herd fluctuates between 3,000 and 5,500 animals. Over the past century, the national park’s conservation effort pushed bison into the public spotlight. The animal has become a symbol of the great American West, and recently, bison were named the US national mammal.&#13;
&#13;
Many of Yellowstone National Park’s bison reside in the park’s northern range, a 380,000-acre network of valleys, mountains, and river basins. One of these valleys, Lamar, is a hotspot for bison viewing, but, unbeknownst to many casual tourists, the area has also long been the center of an intense scientific debate. &#13;
&#13;
Before thousands of bison covered the floor of Lamar Valley, a different hooved mammal stood in their place. Over the 19th and 20th centuries, hunting pressure, federal policy, and unnatural predator-prey relationships made Yellowstone’s northern range a haven for elk herds. As they proliferated in peace, elk chewed through the northern range’s preexisting ecosystems. Their appetites took a severe toll on native flora, which, in turn, shrank habitats for other wildlife. Debates about park management and range science broke out between independent scientists and Yellowstone officials. The disagreements lasted for decades. But in the late 1990s, a whirlwind of decisions reduced elk herds to a more manageable level and kept them there. Scientists thought that finally, the northern range’s native flora and fauna might have a chance to recover. &#13;
&#13;
For many years, it seemed like an ecological revival was beginning. But not everywhere. Regrowth in regions of the northern range where bison heavily grazed was lagging behind. A growing body of research suggests that bison are having an adverse effect on Yellowstone’s ecosystems similar to that of the historic overabundance of elk. In Lamar Valley, many riverbanks are still devoid of trees, beavers are few and far between, and non-native species are increasingly prevalent. &#13;
&#13;
Yellowstone officials disagree with this consensus. Instead, they point to research showing how bison positively impact the landscape. In 2023, the park released a bison management proposal that has only intensified the debate. The proposal dismissed a large body of research as insignificant, going on to suggest an increase in the size of the park’s bison herd. In addition to concern about ecological degradation, many independent researchers are perplexed as to why Yellowstone — the world’s first national park — is seemingly intent on diminishing or ignoring the significance of legitimate scientific research.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157119</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Envisioning Water: Sustainability and Future-Making in Dubai and Los Angeles</title>
<link>https://hdl.handle.net/1721.1/157118</link>
<description>Envisioning Water: Sustainability and Future-Making in Dubai and Los Angeles
Christidi, Nadia
The following dissertation explores how the future of water is being imagined, planned and&#13;
prepared for in two dryland cities – hyper-arid Dubai and semi-arid Los Angeles – as the&#13;
climate changes and as they face increasing pressures to become more ‘sustainable.’ Both&#13;
Dubai and LA are cities that have long been deemed unsustainable, but are aiming to&#13;
become sustainability leaders. Dubai, which relies on energy-intensive desalination and has&#13;
high water consumption, including in ubiquitous urban greening, is investing heavily in&#13;
achieving efficiencies and powering water through clean energy. Los Angeles, which&#13;
sources the majority of its water through aqueduct systems from faraway places where&#13;
water is becoming increasingly taxed, is looking to produce more of its water supply&#13;
locally, and especially through wastewater recycling. Throughout the dissertation, I trace&#13;
the plans, projects, and policies being introduced in this vein to consider how&#13;
‘sustainability’ initiatives play out and get negotiated through the socio-political and&#13;
political economic structures in the two cities to unique effects.&#13;
To get at sustainability’s variegated forms and effects, I first view sustainability as a&#13;
“boundary object” (Star and Griesemer) and “technology of imagination” (Pederson et al.).&#13;
Treating sustainability as a “boundary object” that is shared but viewed differently by&#13;
actors enables me to home in on the interests and forces, sometimes countervailing, that&#13;
shape sustainability projects. Treating it as a technology of imagination allows me to get at&#13;
the imaginative effects that sustainability projects constitute. Second, I consider how these&#13;
interests, forces, and effects emerge from and get mediated through entrenched structures&#13;
like bureaucratic systems, accumulation regimes, and sunken investments, which produce&#13;
a stickiness to infrastructures and infrastructural visions that renders change challenging,&#13;
slow, and incremental. As such, I show, for instance, how Dubai’s highly centralized&#13;
governance structure and foreign-investment development model produce an emphasis on&#13;
sustainability’s enhancement of the city-state’s competitiveness agenda that can belie&#13;
larger eco-realities, while LA’s fragmented institutional, regulatory, and financing scapes&#13;
complicate collaboration on recycling projects that span and exceed individual&#13;
institutional mandates.&#13;
&#13;
Finally, alongside the municipal projects I focus on, I also look at visions of the future by&#13;
artists, designers, and architects to get at how the arts might provide alternatives that in some cases could help get beyond the stickiness of sustainability as it is currently being&#13;
imagined.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157118</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Astrophysical Simulations with GPUs: A Case Study of Radiative Transfer in arepo-rt</title>
<link>https://hdl.handle.net/1721.1/157117</link>
<description>Accelerating Astrophysical Simulations with GPUs: A Case Study of Radiative Transfer in arepo-rt
Verbeek, Erkin Emiel
Radiative transfer (RT) is a crucial ingredient for self-consistent modelling of numerous astrophysical phenomena across cosmic history. However, on-the-fly integration into radiation-hydrodynamics (RHD) simulations is computationally demanding, particularly due to the stringent time-stepping conditions and increased dimensionality inherent in multifrequency collisionless Boltzmann physics. The recent emergence of exascale supercomputers, equipped with large numbers of CPU cores and GPU accelerators, offers new opportunities for enhancing RHD simulations. We present the first steps towards optimizing the RHD solver AREPO-RT for such high-performance computing environments. We implement a novel node-to-node communication strategy that uses shared memory to replace intranode communication with direct memory access. Furthermore, combining multiple internode messages into a single message substantially enhances network bandwidth utilization and performance for large-scale simulations on modern supercomputers. The single-message node-to-node approach also improves performance on smaller-scale machines with less optimized networks. Additionally, by transitioning all RT-related calculations to GPUs, we achieve a significant computational speedup of around 15x for standard benchmarks compared to the original CPU implementation. As a case study, we perform cosmological RHD simulations of the Epoch of Reionization, employing a setup similar to that of the THESAN project. In this context, RT becomes sub-dominant such that even without modifying the core AREPO codebase, there is an overall threefold improvement in efficiency. The advancements presented here have broad implications, potentially transforming the complexity and scalability of future simulations for a wide variety of astrophysical studies. This work serves as a blueprint for porting similar simulation codes based on unstructured resolution elements to GPU-centric architectures.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157117</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kent Kiehl’s Search for the Criminal Brain America’s self-proclaimed “psychopath whisperer” says he can predict criminality in incarcerated people. Is the legal system buying it?</title>
<link>https://hdl.handle.net/1721.1/157116</link>
<description>Kent Kiehl’s Search for the Criminal Brain America’s self-proclaimed “psychopath whisperer” says he can predict criminality in incarcerated people. Is the legal system buying it?
Hopkins, Sarah Rebecca
Since the 19th century, researchers have attempted to uncover the biological roots of criminality. The process has been both scientifically dubious and ethically fraught. While biological theories of criminal behavior faded after World War II, they arose again in the 1990s and early 2000s, when new brain imaging techniques collided with a growing interest in understanding how biological drivers of crime, if they exist, could be analyzed to understand, and even predict, criminal behavior. This thesis examines the research and claims of a prominent neuropsychologist within that historical context. He claims to have conducted promising brain research on incarcerated people that could uncover biological markers of criminal behavior, or even predict future criminality. Yet methodological and ethical questions have been raised about his research. Is it scientifically valid to have a brain-based view of criminal behavior? Is it ethically valid to assume that criminal behavior can be decoded from the brains of people incarcerated in a system that disproportionately impacts people of color and those from low socio-economic backgrounds? His critics are doubtful.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157116</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nipah: The history, and future, of one of the world’s most lethal viruses</title>
<link>https://hdl.handle.net/1721.1/157115</link>
<description>Nipah: The history, and future, of one of the world’s most lethal viruses
Viveros, Alex Gabriel
The Nipah virus kills around three quarters of people who contract it, making it one of the most lethal viruses known to infect humans. The virus first emerged in 1998, when hundreds of pig farmers in Malaysia fell ill with fevers and encephalitis, or brain inflammation. Nipah has caused smaller outbreaks in nearby Bangladesh nearly every year since then. The Malaysian farmers appeared to have been infected directly by their pigs, rather than by each other. For a time, there was no clear evidence that Nipah could spread from humans to other humans. That changed in April of 2004, when investigators responding to a Nipah outbreak in a remote district in Bangladesh discovered that the virus was spreading person to person. Pteropus fruit bats, which are native to South Asia, were identified as the natural reservoirs of the Nipah virus. Researchers have spent the last two decades studying the virus’ transmission in bats and how the virus spills over into humans. Institutions across the world have even recently started developing Nipah vaccines. Scientists believe the Nipah strains that currently circulate in humans are likely not transmissible enough to ignite a pandemic in people. That could change. Whether because the virus one day evolves to spread better within humans, or because it hits a particularly susceptible place and thrives, officials worry about what could happen if Nipah ever affects larger populations. The Nipah virus is just one of many zoonotic pathogens that scientists are studying to understand how humanity can prepare for future deadly outbreaks.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157115</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Contours of the Cloud: Dissecting the Real Estate Investment Decisions of Data Center Operators</title>
<link>https://hdl.handle.net/1721.1/157114</link>
<description>The Contours of the Cloud: Dissecting the Real Estate Investment Decisions of Data Center Operators
Fawcett, Robert Logan
This thesis investigates the real estate investment decisions of data center operators, with a focus on how key infrastructure characteristics influence data center development. Using a sequential econometric approach, the research applies both a logit and a hedonic model to evaluate the importance of various factors. The logit model explores the likelihood of data center development at the county level, highlighting geographical characteristics. The hedonic model examines the impact of specific site attributes, such as proximity to power infrastructure and fiber, on the scale of data center facilities in megawatts. The findings suggest that colocation data centers prioritize connectivity, electrical infrastructure, and urban proximity, while the location of hyperscale facilities is more variable and less predictable. This study enhances our understanding of how modern technological demands, particularly in the AI era, shape real estate strategies and offers insights into future trends in digital infrastructure investments.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157114</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surgery Exact Triangles in Instanton Theory</title>
<link>https://hdl.handle.net/1721.1/157113</link>
<description>Surgery Exact Triangles in Instanton Theory
Bhat, Deeparaj
The introduction of instanton Floer theory and Donaldson polynomial invariants in the 1980s revolutionised the study of low-dimensional topology. Since then, many Floer theories have been introduced with different structural properties and qualitative features. One of these Floer theories, Heegaard Floer theory, is popular due to its computational ease and rich algebraic structure. One computational tool absent from other Floer theories is the integer surgery formula, which computes the Heegaard Floer homology of 3-manifolds obtained by surgery along knots. This thesis establishes a new surgery formula in instanton Floer theory. The algebraic language to express this formula is that of the derived category of chain complexes. The first part of the thesis describes this surgery formula, whose statement and proof are inspired by the Atiyah-Floer conjectures. The second part then contrasts with the Heegaard Floer analogue by showing that instanton and Heegaard Floer theory cannot agree over the integers.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157113</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of Isotopically Labeled Fe and S-alkylated Iron-Sulfur Clusters</title>
<link>https://hdl.handle.net/1721.1/157112</link>
<description>Synthesis of Isotopically Labeled Fe and S-alkylated Iron-Sulfur Clusters
Linn, Brittany
Radical S-adenosylmethionine (SAM) enzymes (RS enzymes) use a 3:1 site-differentiated [Fe₄S₄]⁺ cluster to reductively cleave the SAM cofactor and generate a 5’-deoxyadenosyl radical intermediate (5’-dAdo•) that regio- and stereospecifically abstracts an H-atom from the target substrate. It has been proposed that 5’-dAdo• binds to the unique Fe site before abstracting an H-atom from the substrate. However, due to the transient nature of captured reaction intermediates, their precise structures have yet to be fully elucidated and, therefore, their role in the mechanism of RS enzymes remains unclear. Our group has established reliable methods of synthesizing alkylated [Fe₄S₄] clusters that can serve as models of organometallic intermediates in RS enzyme catalysis. These clusters are competent for radical release and, upon oxidation, undergo an alkyl migration process to yield S-alkylated clusters. A cluster species containing a unique alkylated Fe site with a coordination number greater than four is likely generated in these processes, although a stable cluster of this type has yet to be isolated and crystallographically characterized. This work reports the synthesis of α-²H and α-¹³C isotopically labeled Fe- and S-ethyl ligated [Fe₄S₄] clusters to determine their electron-nuclear hyperfine parameters by ENDOR spectroscopy. These parameters will aid in the identification of alkylated [Fe₄S₄] cluster intermediates generated in biological studies. Additionally, in an attempt to synthesize an [Fe₄S₄]³⁺ cluster with a five-coordinate, Fe-alkylated site, a series of benzyl and phenyl ligated clusters were prepared and analyzed by NMR and EPR spectroscopies.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157112</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical Properties of Colloidal II-VI and III-V Semiconductor Nanocrystals: Single Nanocrystal Photon Correlation Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/157111</link>
<description>Optical Properties of Colloidal II-VI and III-V Semiconductor Nanocrystals: Single Nanocrystal Photon Correlation Spectroscopy
Berkinsky, David
Colloidal nanocrystals (NCs), also known as quantum dots, are nanometer-sized semiconductor crystalline structures composed of thousands to tens of thousands of atoms, placing them between the molecular and the bulk worlds and allowing them to harness unique qualities from both. Colloidal NCs are used in many applications including light-emitting diodes (LEDs), photovoltaics (solar cells), lasers, transistors, photocatalysis, and many more. In this thesis, I investigate the optical properties of colloidal NCs, specifically InP/ZnSe/ZnS, CdSe/CdS/ZnS, and ZnSe/ZnS NCs, using a combination of ensemble and single-NC photon correlation spectroscopic techniques. In the first chapter, I introduce the photophysical properties of colloidal NCs and the spectroscopic techniques relevant to my studies. In the second chapter, I determine the dominant photoluminescent line shape broadening mechanisms in single InP/ZnSe/ZnS and CdSe/CdS/ZnS NCs using temperature-dependent photoluminescent spectroscopic techniques. In the third chapter, I investigate the coherent emissive properties of single InP/ZnSe/ZnS and CdSe/CdS/ZnS NCs at cryogenic temperatures, demonstrating the longest coherence time measured in a colloidal NC system to date. In the fourth chapter, I develop an ensemble third-order correlation technique to elucidate the average single ZnSe/ZnS NC triexciton efficiency and dynamics. Finally, I propose future directions in the fifth chapter, including a fourth-order correlation technique to resolve absolute energy information on timescales faster than CCD-based spectroscopic techniques, and an open-access photon correlation Monte Carlo toolkit with the aim of filling education gaps and providing the colloidal NC community with a database of analytical tools that will encourage a wider audience to engage with photon correlation spectroscopy.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157111</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Synergistic Understanding of Language Processing in Biological and Artificial Systems</title>
<link>https://hdl.handle.net/1721.1/157110</link>
<description>Towards Synergistic Understanding of Language Processing in Biological and Artificial Systems
Hosseini Asl, Eghbal
The faculty of language in the human brain relies on a network of frontal and temporal regions, predominantly in the left hemisphere, often defined as the “language network”. Despite decades of research aimed at uncovering the neural mechanisms underlying activity in this network, a computationally precise account has remained elusive. Over the past five years, artificial neural networks (ANNs) have achieved capabilities in the comprehension and production of language that are indistinguishable from those of humans, and their internal representations bear similarity to activity within the language network. In this thesis, I aim to build a synergistic understanding of language processing in both ANN models and the language network in the human brain by addressing three main questions: 1. When and how do human brains and ANN language models converge or diverge in their representations during language processing? 2. How does the amount of training data affect convergence between the human brain and ANN language models? 3. What computational mechanisms could underlie similarities in language processing between human brains and ANN language models?&#13;
&#13;
To answer the first question, I demonstrate that representational spaces converge between successful ANNs and the human brain, presumably driven by the statistics of their inputs. I show that brain responses to stimuli (sentences) that are represented similarly across multiple successful ANNs are easier to predict from model representations; in contrast, brain responses to sentences that are represented differently across models are challenging to predict, despite high consistency among human participants. Extending these findings to the domain of vision, I suggest that the principle of representation universality may underlie information processing across various domains.&#13;
&#13;
The second question addresses a common criticism of language ANNs: namely, that they are implausible as models of human language processing because they require vastly more training data than humans receive. Using two complementary approaches, I show that ANNs can build representations similar to those in the human language network even with a “developmentally realistic” amount of training data, approximately 100 million words.&#13;
&#13;
Finally, to answer the third question, I draw inspiration from computational neuroscience to reveal how ANN language models learn a predictive model of linguistic input. By focusing on representational geometry, I demonstrate that ANN models progressively “untangle” the temporal trajectory of a sentence’s representation via straightening—reduction in curvature between adjacent words as the input is passed through the model’s layers. Using this straightening mechanism, the ANN model recasts next-word prediction as a smooth linear extrapolation from the current internal state to a future state. Straightening emerges as a result of model training and scales with model size. Furthermore, the average degree of sentence straightening in the deep layers of the model correlates with corpus-based estimates of sentence surprisal, which are linked to human comprehension difficulty (e.g., as reflected in reading times).&#13;
&#13;
Collectively, these lines of work provide essential ingredients for building a more computationally precise model of language processing in the human brain, leveraging synergies with artificial neural network language models.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157110</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Drivers of Deforestation using Games on Spatial Networks</title>
<link>https://hdl.handle.net/1721.1/157109</link>
<description>Understanding Drivers of Deforestation using Games on Spatial Networks
Seby, Jean-Baptiste
As the impacts of climate change become more extensive and intense, the need for effective mitigation and adaptation actions becomes increasingly urgent. Since deforestation is a key driver of CO₂ emissions and forests constitute a crucial carbon sink, mitigating deforestation is an essential policy lever for governments. However, much of tropical deforestation results from the actions of private entities that use the cleared land for activities such as palm oil tree cultivation, timber plantation, and agriculture. Often, the incentives to engage in (often illegal) deforestation within a forest concession are coupled with these activities and are also shaped by the activities in neighboring concessions. In this thesis, we focus on the problem of modeling these strategic interactions using game theory. We analyze a class of games in which agents engage in coupled activities over a spatial network and study a policy intervention to limit illegal deforestation.&#13;
&#13;
Firstly, we conduct an equilibrium analysis of a game in which each agent decides the production levels of her coupled activities in the presence of network effects. Practically, these network effects are induced by the spatial arrangements of concessions and their ownership structures. We consider the general case where network effects are heterogeneous, i.e., the network effects influencing palm oil tree cultivation and timber logging are described by different graphs. We provide a sufficient condition for the existence and uniqueness of a Nash equilibrium. This result follows by leveraging a potential function of the game or via a general variational inequality. &#13;
&#13;
Secondly, we analyze how the spatial structure of concessions impacts the equilibrium outcome. In addition to the basic game in which each agent simultaneously engages in two activities, we consider a variation in which agents engage in one of the activities (but not both). We show that in both cases the equilibrium structure can be expressed as a linear combination of weighted Bonacich centrality vectors -- a node-centrality measure that depends on the total number of walks that depart from a node (concession). Our analysis provides new insights into the drivers of illegal logging in forest regions where palm oil cultivation and timber logging are coupled.&#13;
&#13;
Thirdly, we evaluate the impact of an “edge removal” intervention policy in which the boundary between two neighboring concessions is monitored or a buffer is created between them. We characterize the policy of a social planner who is interested in maximally reducing the illegal production of timber. Interestingly, we identify a regime shift (or phase transition) as the local network effect and the level of coupling between activities vary. This result identifies conditions under which the social planner should incentivize specialization (enforce cultivation of either palm oil trees or timber) versus diversification (allow cultivation of both palm oil trees and timber).
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157109</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards depth-resolved multi-cubic-centimeter field of view endoscopic camera for intraoperative nerve identification.</title>
<link>https://hdl.handle.net/1721.1/157108</link>
<description>Towards depth-resolved multi-cubic-centimeter field of view endoscopic camera for intraoperative nerve identification.
Yoon, Yong-Chul
One out of every five peripheral nerve injuries in the United States has an iatrogenic origin. These injuries can cause chronic neuropathies, paresthesia, and varying functional losses. To reduce the risk of nerve injury, surgeons meticulously identify and track nerves within the surgical field using white-light magnification. However, small (sub-millimeter diameter) and buried nerves are often difficult to identify with this approach. This has motivated a long-standing effort to develop improved nerve visualization technologies that are deployable in both open and minimally invasive surgical workflows. Fluorescence imaging is the most commonly explored strategy, and multiple exogenous fluorophores that bind to nerve-specific targets have been developed. However, fluorescence imaging has several limitations, including a disrupted workflow (due to the need for specialized lighting) and a significant regulatory burden. For these reasons, fluorescence-based nerve visualization has not yet been clinically adopted.&#13;
&#13;
Polarization-based optical coherence tomography (OCT) approaches to nerve visualization would inherently mitigate each of these translational challenges. First, OCT imaging is not affected by room light and thus can be used simultaneously with surgical lighting. Second, OCT is label-free and avoids regulatory pathways associated with new drug development. However, because OCT offers high-resolution, three-dimensional imaging, a surgical OCT system supporting video-rate acquisition of cubic-centimeter fields would require signal capture bandwidths that are several orders of magnitude higher than what is available today. It is unlikely that this gap will be addressed through incremental advances in existing OCT platforms.&#13;
&#13;
In this thesis, we present a radically different OCT platform designed to aggressively reduce signal capture bandwidths while also simplifying the optical and electronic subsystem designs. The proposed approach is contour-looping (CL-) OCT (pronounced cloaked). It retains the depth-sectioning capability upon which OCT is based but discards the requirement of comprehensive three-dimensional imaging, which results in impractical signal capture bandwidths. As such, CL-OCT defines a strategy for low-bandwidth depth-sectioned imaging that may be sufficient for specific imaging tasks such as nerve identification. Importantly, the CL-OCT platform is compatible with a camera-based (i.e., scan-free) deployment that is advantageous for endoscopic deployments. In the second component of this thesis, we provide extensive theoretical and experimental studies on how optical amplifiers can be used in OCT to address sensitivity challenges of high-speed surgical OCT platforms like CL-OCT. Together, these lines of research define a new approach to meeting the need for OCT-based solutions for intraoperative nerve identification. This technology, if successfully translated, may lead to a lower incidence of iatrogenic nerve injury.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157108</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive and Prescriptive Trees for Optimization and Control Problems</title>
<link>https://hdl.handle.net/1721.1/157107</link>
<description>Predictive and Prescriptive Trees for Optimization and Control Problems
Kim, Cheol Woo
This thesis introduces novel methods to expedite the solution of a broad range of optimization and control problems using machine learning, specifically decision tree algorithms. In many practical settings, similar optimization and control problems often need to be solved repeatedly. We propose methods to leverage patterns from pre-solved problem instances using machine learning, leading to drastically faster solutions once training is complete. &#13;
&#13;
The thesis is structured into four parts, each tackling a different class of optimization or control problems. In Chapter 2, we propose a machine learning approach to the optimal control of multiclass fluid queueing networks (MFQNETs). We prove that a piecewise constant optimal policy exists for MFQNET control problems, with segments separated by hyperplanes passing through the origin. We use Optimal Classification Trees with hyperplane splits (OCT-H) to learn an optimal control policy for MFQNETs. &#13;
&#13;
In Chapter 3, we study fluid restless multi-armed bandits (FRMABs), deriving fundamental properties and designing efficient numerical algorithms. Using these results, we learn state feedback policies with OCT-H and introduce a novel feature augmentation technique to handle nonlinearities.&#13;
&#13;
In Chapter 4, we propose a machine learning framework for solving two-stage linear adaptive robust optimization problems with binary here-and-now decisions and polyhedral uncertainty sets. We also introduce novel methods to expedite training data generation and reduce the number of different target classes the machine learning algorithm needs to be trained on. &#13;
&#13;
In Chapter 5, we introduce a prescriptive machine learning approach to speed up the process of solving mixed integer convex optimization (MICO) problems. We use a prescriptive machine learning algorithm, Optimal Policy Trees (OPT), instead of more commonly used classification algorithms. We demonstrate that OPT-based methods have a significant advantage in finding feasible solutions compared to classification algorithms.&#13;
&#13;
We test our approach on various synthetic and real-world problems. Using the proposed methods, we can obtain high-quality solutions to a broad range of large-scale optimization and control problems in real time, within milliseconds.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157107</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Do High Street Retail Rents Align with the Economy? An Analysis of Retail Real Estate Pricing Dynamics Based on Macroeconomic Trends</title>
<link>https://hdl.handle.net/1721.1/157106</link>
<description>Do High Street Retail Rents Align with the Economy? An Analysis of Retail Real Estate Pricing Dynamics Based on Macroeconomic Trends
Xu, Yujian
This study closely examines the correlation between high street retail rents and key economic indicators, specifically the Consumer Price Index (CPI) and Gross Domestic Product (GDP). Utilizing data on rent levels from prominent high streets globally, the analysis incorporates these macroeconomic indicators to discern patterns and relationships. Through methodologies such as multiple linear regression and the Error Correction Model (ECM), the paper aims not only to analyze how high street retail rents align with CPI and GDP but also to explore the primary factors influencing these rents. When studying high street retail properties or considering the acquisition of such properties, this methodology can be used to determine whether a high street is susceptible to macroeconomic fluctuations. If it is not, it may be necessary to consider the uniqueness of the area or the potential risks involved.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157106</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Effect of Social Information on Reliance and Efficacy&#13;
in AI-assisted Prediction</title>
<link>https://hdl.handle.net/1721.1/157105</link>
<description>The Effect of Social Information on Reliance and Efficacy&#13;
in AI-assisted Prediction
Alsobay, Mohammed
This work addresses an under-explored aspect of people's utilization of algorithmic decision support systems: How do people perceive and use these systems under social influence? Through a pre-registered randomized human-subject experiment, I study the effect of two forms of social information (direct conversations and summarized peer decisions) on users' reliance and effectiveness in leveraging algorithmic advice across a series of decision-making tasks, and how the availability of local model explanations and performance feedback moderates this effect. I find that, on average, neither form of social information affects trust directly, yet both moderate the extent to which feedback and model explanations influence trust in the algorithm. However, while social information can influence trust in the algorithm, I detect no effect on how effectively people utilize algorithmic advice. By describing this interplay between social information, algorithmic transparency, and user behavior, this work contributes to recent research on collective intelligence and sociotechnical approaches to human-AI interaction.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157105</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Curve of Inflation Expectations and Firms’ Investments</title>
<link>https://hdl.handle.net/1721.1/157104</link>
<description>The Curve of Inflation Expectations and Firms’ Investments
Perinelli, Giuditta
Using rich survey data on Italian firms, this paper studies the formation mechanisms of inflation expectations at different forecasting horizons. Starting from empirical evidence embedded in firms’ inflation expectation curves, we obtain three main findings: (1) firms extrapolate at long forecasting horizons, (2) inflation forecasts overreact (underreact) at long (short) forecasting horizons, and (3) long-term inflation expectations impact investment decisions. Specifically, we find that a 1% wedge between the 4-year-ahead and 1-year-ahead expected inflation is associated with a 0.8% increase in the probability of investing. What motivates this result? After ruling out the alternative channels of (1) an increase in expected demand, (2) a decrease in the supply of input goods, and (3) an improvement in financing conditions, we claim that a decrease in the perceived cost of capital is the main driver.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157104</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Seoul Apartment Prices during Population Decline Era</title>
<link>https://hdl.handle.net/1721.1/157103</link>
<description>Analysis of Seoul Apartment Prices during Population Decline Era
Cho, Moohyun
Since the early 2020s, South Korea has faced population decline driven by the world's lowest birth rate, yet apartment prices in the capital region, covering Seoul and the surrounding Gyeonggi-do province, have paradoxically shown a consistent upward trend. This thesis explores the persistent rise in apartment prices despite Seoul's diminishing population, providing insights into the economic and social factors behind this trend. Through an analysis of the characteristics of Seoul apartments, including the unique Jeonse system, and the impacts of population trends by region, this research examines the broader implications of single-person household trends and an aging population. Furthermore, comparative case studies from Japan and France support the relationship between aging populations and housing markets. By applying various indices related to apartment prices, this study demonstrates the correlations between apartment prices and demographic changes, and explores potential future scenarios for the housing market in Seoul.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157103</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Memory and fluctuations in chemical dynamics</title>
<link>https://hdl.handle.net/1721.1/157102</link>
<description>Memory and fluctuations in chemical dynamics
Farahvash, Ardavan
This thesis describes the development and application of theories that elucidate both the static and time-dependent responses of various condensed-phase environments to molecular systems. Part I, the cornerstone of this thesis, explores the role of surface vibrations in gas-phase heterogeneous catalysis. Utilizing the Mori-Zwanzig projection operator formalism, I have developed a theory that maps surface vibrations to a generalized Langevin equation (GLE). Two projection schemes are considered. The first scheme projects the motion of the entire solid substrate onto the motion of molecular adsorbates. The second scheme projects onto both the motion of adsorbates and of surface adsorption sites. Through the first approach, I demonstrate that physisorbed species primarily couple with acoustic phonons, while chemisorbed species couple with dispersionless local vibrations. I also use this scheme to examine how phonons affect reactions rates, both in ensembles near and far-from thermal equilibrium. Using the second approach, I study how energy is dissipated in simulations of molecule-surface scattering. I demonstrate that phonon confinement effects from nanoscale simulations can significantly impact calculated surface sticking coefficients. Part II considers the role of solvent in adsorption and desorption at liquid-solid interfaces. Specifically, I employ enhanced sampling methods to study a model system of carbon monoxide at a water/platinum interface. Using these methods, I show that the local coordination number around a CO molecule plays a crucial role in the transition states of the adsorption/desorption process, and that CO tends to increase its coordination number before desorbing. Part III develops a machine learning and electronic structure framework for the computationally efficient parametrization of Frenkel Hamiltonians from snapshots of molecular dynamics simulations of organic semiconductors. 
Direct electronic structure calculations on these snapshots encode the nuclear fluctuations of the chromophores in the material and how they couple to excitons, but at enormous cost. I discuss how the strategic application of machine learning methods can drastically reduce the number of electronic structure calculations needed to produce a complete exciton trajectory. Critically, I demonstrate that by decomposing the two-molecule excitonic coupling into interactions between one-molecule transition monopoles, a more accurate and less data-intensive machine learning scheme can be devised.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157102</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>No One Wants To Be A Parasitologist: The Shrinking Field of America's Least Favorite Animals</title>
<link>https://hdl.handle.net/1721.1/157101</link>
<description>No One Wants To Be A Parasitologist: The Shrinking Field of America's Least Favorite Animals
Richter, Hannah
Parasites have a bad rap. Most people think of them as scary, gross, or both, but they are also diverse creatures that have evolved in and on every animal and ecosystem on the planet. Parasitism is the most successful way of life for an animal — representing more than 40% of all species — and the wormy and crawly creatures it encompasses are vastly understudied. An increasing volume of research shows that parasites play important ecological functions, from keeping animal populations in check to stabilizing food chains to driving evolution and biodiversity. While parasites can cause horrible human suffering, especially in countries without reliable clean water or sanitation systems, only a fraction of parasites affect humans, with estimates as low as 0.1%. &#13;
&#13;
As climate change and habitat loss threaten animals, so too do they endanger the parasites that live on and inside them. At the same time that parasite biodiversity faces decline, the field of parasitology reckons with its own crisis: membership in the American Society of Parasitologists has declined by 76% in the past 50 years, and many of the world’s most important parasitologists are elderly or dead. To revitalize the field, parasitologists are charming younger generations with parasite Pokémon cards and stuffed animals and attempting to integrate parasites into global conservation programs. One main question is on parasitologists’ minds: How can they convince people to discover, catalog, and understand the world's parasite biodiversity before parasites, the field’s leaders, and their valuable knowledge die off?
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157101</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-molecule diagnostics to support curative interventions for tuberculosis and HIV</title>
<link>https://hdl.handle.net/1721.1/157100</link>
<description>Single-molecule diagnostics to support curative interventions for tuberculosis and HIV
Dougan, Tyler J.
Tuberculosis (TB) and the human immunodeficiency virus (HIV) are two of the leading causes of death worldwide. Tuberculosis is curable, but because of the difficulties of diagnosing it, many people with TB—and the majority of those killed by it—never begin treatment. HIV can be treated with lifelong medication. But if drug resistance develops or treatment is interrupted, the virus resurges. This could be prevented by an HIV cure that either clears HIV from the body or keeps the virus suppressed without continued therapy. Next-generation diagnostics will play a central role in supporting access to existing TB cures and future HIV cures. In this thesis, I describe the advancement of digital enzyme-linked immunosorbent assay (ELISA) protein detection methods in service of curing these two deadly infectious diseases.&#13;
&#13;
Existing TB diagnostics rely heavily on sputum, which is highly infectious, leading to increased TB cases among health care workers and limiting access to places with appropriate biosafety precautions. We developed a multiplexed Single Molecule Array (Simoa) digital ELISA that can diagnose TB from biomarkers in urine. Our assay is highly sensitive, as demonstrated in diverse cohorts totaling approximately 600 individuals.&#13;
&#13;
Simoa is a robust and widely used platform, but its accessibility is limited because it relies heavily on advanced microwell and imaging technology. We developed a new digital ELISA platform, called Molecular On-bead Signal Amplification for Individual Counting (MOSAIC), that performs the final readout step with a flow cytometer, bringing digital ELISA within reach of many hospitals and other health care centers. In addition to reducing instrumentation and cost, MOSAIC also allows for greater sensitivity and higher-order multiplexing than Simoa. It is, to our knowledge, the most sensitive protein measurement technique ever developed, with attomolar limits of detection.&#13;
&#13;
Finally, I describe the application of MOSAIC toward the development of HIV cures and longer-acting antiretroviral medications. These depend on a deeper understanding of the biology of HIV, and when they are ready for clinical trials, will also need highly sensitive tests to characterize the virus-host interactions and determine whether they are working. We developed ultrasensitive Simoa and MOSAIC assays for 20 circulating host and viral proteins and measured them in a cohort of 17 individuals with HIV whose treatment was interrupted, to evaluate which biomarkers could predict when the virus would rebound. Baseline levels of these biomarkers did not predict viral rebound, but changes over time did, highlighting the need for scalable personalized approaches.&#13;
&#13;
HIV and TB are two of the world's deadliest diseases. The next generation of diagnostic technologies, a urine test conducted on expensive instrumentation, and newly identified circulating biomarkers will not in themselves solve these problems. But these more sensitive assays are one step closer to the true biology of these diseases, and these advances in accessibility bring this ultrasensitive monitoring one step closer to the clinic.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157100</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Looking at the Map, Together: Modeling Treatment Center Location Selection and its Effects on Access to Gene Therapy in Brazil</title>
<link>https://hdl.handle.net/1721.1/157099</link>
<description>Looking at the Map, Together: Modeling Treatment Center Location Selection and its Effects on Access to Gene Therapy in Brazil
Wertheimer, Sarah R.
Choosing how many and at which treatment centers to offer a gene therapy to patients is a crucial decision that impacts how far the treatment has to be transported and how far patients have to travel to receive treatment. Many gene therapies are for patients with severe diseases that make it difficult to travel. On the other hand, cold chain requirements make shorter transportation preferable for gene therapies, and few centers have prior experience handling them.&#13;
&#13;
We focus on Brazil and a specific gene therapy product as our case study. We interview local pharmaceutical company employees to understand the stakeholders involved in this decision and the approaches being considered. We model how these approaches would affect patients’ geographic accessibility to treatment and discuss potential modifications to our model. Finally, by means of an interactive workshop, we explore the decision-making discussion between stakeholders in choosing which approach to follow.&#13;
&#13;
We find that the approaches under consideration result in a wide range of geographic accessibility for patients. Early-stage decisions have impacts across stages, and even across therapies, due to a reluctance to select new locations. For patients in the northwest of Brazil to have a treatment center nearby, stakeholders would need to consider candidate locations beyond government reference centers or those with gene therapy experience. Regarding facilitation, we find that quick, low-stakes modeling and joint discussion could allow stakeholders to consider approaches they might not otherwise consider.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157099</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Wildfire Suppression: A branch-and-price-and-cut approach</title>
<link>https://hdl.handle.net/1721.1/157098</link>
<description>Optimizing Wildfire Suppression: A branch-and-price-and-cut approach
Wachspress, Jacob
In periods of intense, synchronous wildfire activity, fire system managers must make rapid fire prioritization decisions over a dispersed geographic area with limited suppression resources. This thesis defines the Wildfire Suppression and Crew Assignment Problem, which optimizes resource allocation to triage fires based on damage risk, crew availability, and spatiotemporal dynamics. We formulate a two-sided set partitioning model on time-space-rest networks for crew assignments and time-state networks for fire damage, with linking constraints between both; this representation can encode a broad class of non-linear wildfire spread models and diverse suppression objectives. To solve it, we develop a two-sided column generation algorithm that generates fire suppression plans and crew routes iteratively. We embed it into a branch-and-price-and-cut algorithm to retrieve an optimal integer solution, using novel special-purpose cuts that augment generalized-upper-bound cover cuts and a novel branching rule that leverages dual information from the linking constraints. Extensive computational experiments show that the algorithm scales to practical problems that remain otherwise intractable. The optimization methodology can provide high-quality solutions by jointly optimizing wildfire triaging and crew assignments, resulting in enhanced wildfire suppression effectiveness.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157098</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis Using State Space Global Coherence of Brain Dynamics in a Young Child Under Sevoflurane General Anesthesia</title>
<link>https://hdl.handle.net/1721.1/157097</link>
<description>An Analysis Using State Space Global Coherence of Brain Dynamics in a Young Child Under Sevoflurane General Anesthesia
Gallo, Sebastian A.
The dynamics of brain states under general anesthesia in infants are complex and exhibit significant developmental changes, particularly in the context of neurophysiological responses. Traditional EEG analysis has been valuable in tracking these changes, but there is a critical need for more precise, quantitative methods to assess neural synchrony and coherence in this vulnerable population. This thesis explores advanced state-space modeling techniques, specifically focusing on State Space Global Coherence (SSGC), to estimate global coherence (GC) during sevoflurane general anesthesia in an infant. Two different SSGC approaches were employed: one approach directly estimated GC from the data, while the other first estimated the covariance matrix and then used this matrix to compute GC. The SSGC approaches were first applied to a validation dataset that had been previously analyzed using SSGC for covariance estimation; this was done to ensure that my analysis was functioning correctly by validating it against a dataset with known outcomes before proceeding with exploratory analysis. Once this was confirmed, the next step involved applying this pipeline to EEG data from a 10-month-old infant—a dataset where SSGC had not been previously utilized. Following this, both the validation dataset and the infant dataset were used to compare the effectiveness of SSGC for covariance estimation versus direct GC estimation. The infant dataset, in particular, provided an opportunity to explore the utility of SSGC in a new context. Both datasets that the SSGC methods were applied to had a low signal-to-noise ratio. This comparison revealed that direct GC estimation provided improved temporal resolution for GC and the ability to capture dynamic changes in coherence over time. In contrast, SSGC for covariance estimation produced results nearly identical to empirical GC, suggesting that it is more susceptible to noise. 
The resilience of direct GC estimation to noisy data highlights its potential as a robust tool for capturing the spatiotemporal dynamics of neural synchrony under anesthesia. This thesis emphasizes the importance of advanced modeling techniques in enhancing neurophysiological monitoring, with significant implications for improving pediatric anesthetic care and outcomes.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157097</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>State Estimation in Dynamical Robotic System with Non-Gaussian Noise</title>
<link>https://hdl.handle.net/1721.1/157096</link>
<description>State Estimation in Dynamical Robotic System with Non-Gaussian Noise
Jin, David
State estimation is critical for robot operation. Most estimation algorithms assume that robotic sensor measurements are contaminated by Gaussian noise. However, in practical applications, the noise is often non-Gaussian, heavy-tailed, or even multi-modal. In this thesis, we develop algorithms that perform state estimation in dynamical systems with arbitrary noise and prove theoretical guarantees for them. We tackle two challenging state estimation problems: multi-model point cloud registration and state estimation in polynomial dynamical systems, both contaminated by non-Gaussian noise. In the multi-model 3D registration problem, we are given two point clouds depicting a set of objects at different poses (possibly including points belonging to the background), and we want to simultaneously reconstruct how all objects moved between the two point clouds. We propose a simple approach based on Expectation-Maximization (EM) and establish theoretical conditions under which the EM approach recovers the ground truth. We evaluate the approach on simulated and real datasets ranging from table-top scenes to self-driving scenarios and demonstrate its effectiveness. For state estimation in polynomial systems corrupted by arbitrary noise, we develop a new filtering approach called the Generalized Moment Kalman Filter (GMKF). The GMKF formulates the prediction and update steps as polynomial optimization problems (POPs) and solves them using moment relaxations, carrying over a possibly non-Gaussian belief. In the linear-Gaussian case, the GMKF reduces to the standard Kalman Filter. We demonstrate that the GMKF performs well under highly non-Gaussian noise and outperforms common alternatives, including the Extended and Unscented Kalman Filters and their variants on matrix Lie groups. We also showcase applications to challenging landmark-based and lidar-based robot localization problems.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157096</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Phight for Phage: Understanding Bacteriophage Therapy in Aquaculture and Human Health</title>
<link>https://hdl.handle.net/1721.1/157095</link>
<description>The Phight for Phage: Understanding Bacteriophage Therapy in Aquaculture and Human Health
Cornman, Eva
In the wake of the antibiotic resistance crisis, alternative options to prevent and treat bacterial infections are desperately needed. Researchers across the world are turning to the most abundant biological particle on our planet: bacteriophage. Often called phage, these microscopic viruses infect bacteria, and their high specificity and incredible abundance may make them viable treatment options. Scientists have known about phage for over a century, but renewed interest over the past few decades has spurred a wide variety of research into the biology and applications of these viruses. The benefits, and some of the challenges, of phage therapy for both aquaculture and human health are discussed here.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157095</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancements in Models and Algorithms for Management Science</title>
<link>https://hdl.handle.net/1721.1/157094</link>
<description>Advancements in Models and Algorithms for Management Science
Yao, Yuanfan (Evan)
Management science is an interdisciplinary field that leverages a variety of analytical techniques to inform effective decision-making within businesses and organizations. It is a dynamic field that is continuously innovating as data becomes increasingly available and businesses leverage new digital technologies. As a result, there is a constant need to develop models and algorithms to address unique decision-making settings. This thesis is composed of three independent chapters, each of which proposes novel modeling insights and algorithmic solutions for real-world problems.&#13;
&#13;
Chapter 2 studies a mathematical model in online resource allocation where a decision-maker must efficiently allocate a scarce resource to patient and impatient customers. This study is motivated by recent advancements in on-demand online platforms (such as Uber and Instacart) where customers who are patient (e.g., can wait a few minutes for a ride) are offered a discounted price. Under this model, we develop a simple resource allocation policy that has provable theoretical guarantees under a competitive ratio analysis and is also easy to use in practice. Our work supports the managerial intuition that offering discounts for patient customers leads to more robust and efficient resource allocation.&#13;
&#13;
Chapter 3 addresses the challenge of organizing a large corpus of documents into an expert-defined labeling scheme without manual annotation or labeled training examples. This work is motivated by a collaboration with a major pharmaceutical company to streamline root cause analysis of deviations in the manufacturing process. When investigating a new deviation, quickly finding related historical deviations is crucial, but such deviation reports are not organized in a way that facilitates this task. This chapter proposes an innovative methodology called Document Classification with Reference Information (DCRI), which crucially leverages the existence of reference information: documents which describe the taxonomy of interest but are not labeled examples themselves. Empirical results show that DCRI can produce highly accurate labels with minimal intervention from subject matter experts. Based on these empirical findings, we develop a mathematical model for the underlying data generating process and propose both numerical and theoretical findings that further justify the DCRI approach.&#13;
&#13;
Chapter 4 studies a novel way of generating insights from black-box classification models by deriving simple conditions under which the model predicts confidently. Existing work on explaining binary black-box classifiers typically studies when the model predicts 1 or 0 without accounting for the confidence (i.e., probability) of the prediction. Our work argues that explaining when a model makes confident predictions is more useful to a practitioner, as such predictions typically correspond to when a model is more accurate and reliable. We define a novel evaluation metric for black-box explainers that emphasizes confident predictions and develop a local-search-based methodology to find interpretable lists of if-then rules that optimize for this metric. Evaluation on six real-world datasets suggests that such rule-based explanations are effective at capturing highly confident data points. By targeting highly confident predictions of a black-box model, our methodology generates rules that are more useful than existing approaches which only explain a classifier's binary predictions.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157094</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultramafic Alteration and the Cooling of Earth and Mars</title>
<link>https://hdl.handle.net/1721.1/157093</link>
<description>Ultramafic Alteration and the Cooling of Earth and Mars
Murray, Joshua
This thesis deals with the influence of “ultramafic” rocks on the climate of planets. Ultramafic rocks, rich in Mg and Fe, are the most common rocks on Earth but exist primarily in the mantle and rarely outcrop at the surface. They are highly unstable under Earth's surface conditions, where they are altered via incongruent reactions that form clay minerals and iron oxides and ultimately release cations to the ocean. Due to their instability, they play an outsized role in Earth's long-term carbon cycle. My first chapter investigates a hitherto unappreciated mechanism by which ultramafic rocks serve as a carbon sink: the formation of high-surface-area clays and the resultant burial of organic carbon. I use a combination of mineral weathering models and proxy data to show that this mechanism has contributed to the glaciations of the Palaeozoic (541–252 Ma).&#13;
&#13;
Unlike Earth, igneous rocks on the Martian surface are frequently of ultramafic composition. My second chapter argues that the alteration of these Martian ultramafic rocks was fundamental to the cooling of the planet from a habitable surface with liquid water to a cold and icy planet, largely devoid of an atmosphere. I show that the same high-surface-area clay minerals which bury organic carbon on Earth are prevalent enough on Mars to store the bulk of its initial 1-4 bar atmosphere as adsorbed methane. I postulate that this methane was formed abiotically during hydrothermal alteration of ultramafic rocks, a process which is observed in ultramafic systems on Earth. I show that this framework reconciles the histories of carbon isotopes and atmospheric loss-to-space on Mars.&#13;
&#13;
My final chapter quantifies the effects of the alteration of ultramafic and mafic rocks across the Taconic orogeny in Newfoundland, Canada. This collision exposed one of the best-studied ultramafic bodies on Earth, the Bay of Islands ophiolite, and closely preceded global cooling in the Middle-Late Ordovician (470–445 Ma). I present a new method, leveraging both geochemical analysis and modelling of basin sediments, to infer ancient silicate weathering fluxes. I show that the relative weathering rate in this region increased dramatically during the Taconic orogeny. This method could be applied across systems with tectonically-driven changes in surface lithology to build a fuller understanding of the forces which modulate Earth's climate.&#13;
&#13;
My work asks as many questions as it answers but tries to honestly portray the uncertainties associated with applying quantitative methods to noisy geologic systems. I hope that, in trying to meaningfully constrain these processes, I plant seeds of inquiry from which I and others can one day make more concrete statements about cause and effect between tectonics and climate.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157093</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing a Tiled Singular Value Decomposition: A Framework for Tiled Linear Algebra in Julia</title>
<link>https://hdl.handle.net/1721.1/157092</link>
<description>Implementing a Tiled Singular Value Decomposition: A Framework for Tiled Linear Algebra in Julia
Ringoot, Evelyne
High-performance computing (HPC) is essential for scientific research, enabling complex simulations and analyses across various fields. However, the specialized knowledge required to utilize HPC effectively can be a barrier for many scientists. This work introduces a hardware-agnostic, large-scale tiled linear algebra framework in Julia designed to enhance accessibility and usability without compromising performance. By providing a flexible abstraction layer, the framework simplifies the development and testing of new algorithms across diverse computing architectures. The Julia language's multiple dispatch and type inference facilitate the development of type-agnostic, hardware-agnostic, and multi-use frameworks by allowing composability. Utilizing a tiled approach, the implemented framework improves data locality, parallelism, and scalability, making it well-suited for modern heterogeneous environments. Its practical benefits are demonstrated through the implementation of a tiled QR-based singular value decomposition (SVD), showing how it streamlines the development process and accelerates scientific discovery. The developed framework is used to implement an in-GPU tiled SVD and an out-of-core GPU-accelerated SVD. Furthermore, its extensibility is demonstrated by implementing a tiled QR algorithm. This work aims to democratize HPC resources by bridging the gap between advanced computational capabilities and user accessibility, empowering a broader range of scientists to fully leverage modern computing technologies.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157092</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instability Scaffolding: Enacting Strategic Instabilities to Produce Authentic Premium Wine</title>
<link>https://hdl.handle.net/1721.1/157091</link>
<description>Instability Scaffolding: Enacting Strategic Instabilities to Produce Authentic Premium Wine
Zhang, Alan
Unstable conditions can be a risk to productions, potentially disrupting operations and rendering activities unpredictable. While a common organizational response is to minimize instability, I find instead that producers can also purposefully cultivate it to generate value—through strategic instabilities. My dissertation explores how strategic instabilities are enacted in productions of fine wine, articulating the practices and arrangements that facilitate working with unstable production conditions in productive ways—a process I refer to as instability scaffolding. My data are drawn from a 16-month ethnographic study of two field sites in the California premium wine industry, combined with archival data and industry interviews. In Chapter One, I explain why minimizing certain sources of instability, while potentially more efficient, would be considered inauthentic for premium wine productions. In Chapter Two, I look historically at the California premium wine category, and explain why and how working with instabilities of nature became a basis of its authenticity. This chapter examines the instability scaffolding (i.e., cooperative category framing work) performed in the California wine industry to enable such productions to become commercially viable, and identifies the intra-category mutualism that motivated competitors to support such productions. Chapter Three offers insight into the modern-day operations of a world-renowned fine wine producer. I identify the trajectory management work scaffolding this organization’s achievement of craft authenticity, turning production instability into productive instability so that high-quality wines are produced consistently year after year despite relying on unpredictable activities. Chapter Four explores a regional-level instability scaffolding allowing many producers to keep their operations logistically feasible despite working with unstable conditions. 
I show how vineyard proprietors and contract providers worked together to sustain craft authenticity at scale in the region through a process I theorize as contract custodianship. My dissertation concludes in Chapter Five with a discussion of instability scaffolding more broadly and its implications for further research. I highlight how my research contributes new insights into the multiple ways organizations can leverage complex interactions in production by skillfully engaging with them to express authenticity in productions at commercial scale.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157091</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Opinion Dynamics to Collective Action: How Identity-Based Tolerance Leads to Political Extremism</title>
<link>https://hdl.handle.net/1721.1/157090</link>
<description>From Opinion Dynamics to Collective Action: How Identity-Based Tolerance Leads to Political Extremism
Liang, Chen E.
Current sociological theories attribute the recent surge in political extremism to mechanisms of opinion “homophily” (i.e., like-minded individuals interact more while dissimilar ones might distance) and “assimilation” (i.e., interactions homogenize opinions), which collectively suggest a social world dominated by extreme views. Yet this view contradicts empirical evidence showing that extremists still represent a minority and individual opinions remain largely stable. We resolve this apparent paradox by illustrating how extreme collective action can arise from a moderate majority that retains moderate opinions yet responds positively to recruitment by extremists. We break this task down into three steps. First, we theoretically distinguish between opinion homophily and identity homophily (i.e., individuals who share the same identity interact more). Second, we develop an agent-based model to manipulate the strength of identity homophily relative to opinion homophily, while excluding the effect of assimilation (i.e., holding opinions constant). Our model reveals that strong identity-based tolerance can create a “radicalized” structure, which allows extremists and moderates, who disagree in opinion but share an identity, to maintain stable relationships in emergent clusters. Further, the structure concentrates extremists at the center of the clusters, enabling them to form a critical mass that enlists a broader population. Finally, beyond confirming our expectations, we uncover unexpected model behaviors by exploring how the “radicalized” structure can transition among three other distinct structures the model generates. We show that homogeneous groups, often seen as indicators of polarization, could paradoxically be key to reducing organized extremism when dominated by moderates who can effectively mobilize collective action while marginalizing extremists.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157090</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Social Science: Language Models as Scientist and Subjects</title>
<link>https://hdl.handle.net/1721.1/157089</link>
<description>Automated Social Science: Language Models as Scientist and Subjects
Manning, Benjamin S.
We present an approach for automatically generating and testing social scientific hypotheses in silico. This automation is made possible by recent advances in large language models (LLMs), but the key feature of the approach is the use of structural causal models. Structural causal models provide a language to state hypotheses, a blueprint for constructing LLM-based agents, an experimental design, and a plan for data analysis. The fitted structural causal model becomes an object available for prediction or for planning follow-on experiments. We demonstrate the approach with several scenarios: a negotiation, a bail hearing, a job interview, and an auction. In each case, causal relationships are both proposed and tested by the system, finding evidence for some and not others. We provide evidence that the insights from these simulations of social interactions are not available to the LLM purely through direct elicitation. When given its proposed structural causal model for each scenario, the LLM is good at predicting the signs of estimated effects, but it cannot reliably predict the magnitudes of those estimates. In the auction experiment, the in silico simulation results closely match the predictions of auction theory, but elicited predictions of the clearing prices from the LLM are inaccurate. However, the LLM's predictions are dramatically improved if the model can condition on the fitted structural causal model. In short, the LLM knows more than it can (immediately) tell.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157089</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of Microbial Primary and Secondary Metabolism in the Marine Realm</title>
<link>https://hdl.handle.net/1721.1/157088</link>
<description>Characterization of Microbial Primary and Secondary Metabolism in the Marine Realm
Geller-McGrath, David Edward
This thesis applies meta-omics data analysis to elucidate the ecological roles of marine microorganisms in diverse habitats and includes the development of new bioinformatics tools to enhance these analyses. In my second chapter, I applied genome mining tools to analyze the gene content and expression of biosynthetic gene clusters (BGCs). The analysis of BGCs through large-scale genome mining efforts has identified diverse natural products with potential applications in medicine and biotechnology. Many marine environments, particularly oxygen-depleted water columns and sediments, however, remain under-represented in these studies. Analysis of BGCs in free-living and particle-associated microbial communities along the oxycline water column of the Cariaco Basin, Venezuela, revealed that differences in water column redox potential were associated with microbial lifestyle and the predicted composition and production of secondary metabolites. This experience set the stage for my third chapter, in which I developed MetaPathPredict, a machine learning-based tool for predicting the metabolic potential of bacterial genomes. This tool addresses the lack of computational pipelines for pathway reconstruction that predict the presence of KEGG modules in highly incomplete prokaryotic genomes. MetaPathPredict made robust predictions in highly incomplete bacterial genomes, enabling more accurate reconstruction of their metabolic potential. In my fourth chapter, I performed metagenomic analysis of microbial communities in the hydrothermally-influenced sediments of Guaymas Basin (Gulf of California, Mexico). Previous studies indicated a decline in microbial abundance and diversity with increasing sediment depth. Analysis revealed a distribution of MAGs dominated by Chloroflexota and Thermoproteota, with diversity decreasing as temperature increased, consistent with a downcore reduction in subsurface biosphere diversity. Specific archaeal MAGs within the Thermoproteota and Hadarchaeota increased in abundance and recruitment of metatranscriptome reads towards deeper, hotter sediments, marking a transition to a specialized deep biosphere. In my fifth chapter, I developed MetaPathPredict-E, a deep learning-powered extension of MetaPathPredict for eukaryotic metabolism predictions. Eukaryotic metabolism is diverse, reflecting varied lifestyles across eukaryotic kingdoms, but the complexity of eukaryotic genomes presents challenges for assembly and annotation. MetaPathPredict-E was trained on diverse eukaryotic genomes and transcriptomes, demonstrating robust performance on test datasets, thus advancing the study of eukaryotic metabolic potential from environmental samples.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157088</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Small molecule motion within and through organic nanomaterials: an anthology</title>
<link>https://hdl.handle.net/1721.1/157087</link>
<description>Small molecule motion within and through organic nanomaterials: an anthology
Kaser, Sam
This thesis chronicles three distinct projects united by the common theme of small molecule motion in organic materials. Chapter 1 provides a holistic introduction to key background for the main body chapters, Chapters 2-4. The work outlined in Chapters 2 and 3 was completed under the supervision of Prof. Julia Ortony and pertains to self-assembled small-molecule aramid amphiphile (AA) nanostructures. AAs are noteworthy within the class of self-assembled amphiphile materials because of their unusual mechanical stability, borne of strong intermolecular interactions between aramid units. In Chapter 2, I evaluate local conformational dynamics in different chemical domains of an AA nanoribbon through Electron Paramagnetic Resonance (EPR) spectroscopy. These experiments were enabled by co-assembly of AAs with stable nitroxide radical spin labels into the nanoribbon ensemble. Distinct conformational behavior is resolved between domains, and variable-temperature studies enable description of each spin label environment through phase transition characterization and activation energy analysis. Chapter 3 explores AA nanostructure morphology in response to pH changes, i.e., ionization-modulated molecular rearrangements. This chapter is divided into Chapters 3A and 3B. In Chapter 3A, the pH dependency of diammonium headgroup AA nanostructures is correlated with aramid backbone flexibility and intermolecular interactions. These diammonium headgroups are also found to exhibit an aggregation-induced pKa drop, which we leverage in Chapter 3B to induce pH responsiveness in a guanidinium headgroup moiety over a physiologically relevant pH range. Finally, Chapter 4 (completed under the supervision of Prof. Zachary Smith) explores molecular transport of CO₂ gas mixtures through a novel guanidinium-functionalized polymer of intrinsic microporosity (PIM-G) membrane. 
PIM-G shows high permselectivity towards CO₂ over CH₄, N₂, and O₂, and selectivity was further improved by exchanging the polymer’s default Cl− counterion with larger halides. This halide exchange-driven selectivity enhancement occurred without a commensurate drop in CO₂ permeability. This thesis work investigates small molecule motion within novel organic nanomaterials, outlining analytical approaches and structure-property relationships that may be applicable to broad categories of functional materials.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157087</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revealing Structural and Spin-state Dependent Reactivity of Single Atom Catalysts (SACs) with Systematically Improvable Computational Tools</title>
<link>https://hdl.handle.net/1721.1/157086</link>
<description>Revealing Structural and Spin-state Dependent Reactivity of Single Atom Catalysts (SACs) with Systematically Improvable Computational Tools
Jia, Haojun (贾皓钧)
Efficient catalysts are essential for advancing energy conversion and storage technologies, particularly for challenging reactions such as methane-to-methanol conversion and the oxygen reduction reaction (ORR) important for fuel cells. Single-atom catalysts (SACs), particularly doped graphitic catalysts, have emerged as a promising class of materials. SACs combine the advantages of homogeneous and heterogeneous catalysts, offering tunable active sites and scalability. However, understanding the relationship between the structure of SAC active sites and their reactivity remains challenging due to the limitations of experimental characterizations. Computational modeling provides atomic-level insights into SAC active site configurations and the impact of the metal's local environment on their properties and catalytic activity. This thesis presents a combined effort utilizing computational methods to explore the design and optimization of SACs for methane-to-methanol conversion and the ORR.&#13;
&#13;
In this thesis, we use range-separated hybrid density functional theory (DFT) to compare the energetics and structure of the direct metal-coordinating environment in the presence of 2p (i.e., N or O) and 3p (i.e., P or S) dopants and with increasing finite graphene model flake size to mimic differences in local rigidity. While metal–ligand bond lengths in SACs are significantly shorter than those in transition metal complexes, they remain longer than SAC mimic macrocyclic complexes. Consequently, we observe SACs to simultaneously favor the formation of the metal–oxo while also allowing for methanol release. This reactivity is different from what has been observed for large sets of square planar model homogeneous catalysts. Moreover, modulating the coordination environment near single metal sites by means of codopants, we carry out a large-scale virtual high-throughput screening (VHTS) of transition metal (i.e., Mn, Fe, Co, and Ru) SACs codoped with various elements (i.e., N, O, P, and S) in numerous spin and oxidation (i.e., M(II)/M(III)) states for the challenging conversion of methane to methanol. We identify that the ground-state preference is metal- and oxidation-state-dependent. We observe a weak negative correlation between the oxo formation energy (ΔE(oxo)) and the energy of hydrogen atom transfer (ΔE(HAT)), thanks to the high variability in the coordination environment. Therefore, codoped SACs demonstrate flexible tunability that disrupts linear free energy relationships in a manner similar to that of homogeneous catalysts without losing the scalability of heterogeneous catalysts.  Further exploration focuses on codoped Fe and Ru-based SACs for ORR using VHTS and machine learning (ML). The ML models demonstrate superior accuracy in predicting reaction energetics compared to traditional scaling relationships. The findings validate codoping as a powerful strategy for tuning the properties of SACs to achieve enhanced ORR performance. 
Promising catalyst candidates are proposed for experimental validation, showcasing the potential of SACs in overcoming limitations in catalyst design for challenging reactions and providing valuable insights for the rational design of high-performance ORR catalysts.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157086</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding divergence in marine protistan communities: from strain diversity to basin biogeography</title>
<link>https://hdl.handle.net/1721.1/157085</link>
<description>Decoding divergence in marine protistan communities: from strain diversity to basin biogeography
Krinos Quinn, Arianna Isabella
Protists (microbial eukaryotes) in the global ocean are critical components of primary productivity and nutrient recycling. Protists are genetically diverse and have distinctive ecological niches based on genetically-driven differences in physiological fitness. A deeper understanding of which dimensions of protistan genetic diversity translate to measurable phenotypic variation is needed to predict the impact of protists on marine biogeochemistry and protists’ environmental change sensitivity. I cultured twelve strains of the coccolithophore Gephyrocapsa huxleyi across temperatures, which revealed strain-specific differences in thermal optima and niche widths. I used traits measured during the experiments to design a Darwin ecosystem model simulation, which demonstrated basin-specific biogeography of thermal optima and niche widths (Chapter 2). For seven of the twelve strains, I sequenced transcriptomes at 3-5 temperatures to assess gene expression variation. Using the RNAseq data, I developed a regression modeling approach to identify proteome allocation model parameters. Combining differential expression analysis, gene abundance normalization, and the regression model to explore the proteome allocation model parameter space, I probed differences in modeled strategies of G. huxleyi strains in response to temperature (Chapter 3). Scalable workflows highlight the challenge and promise of meta-omic data to link community structure to physiology. I developed a pipeline for metatranscriptome analysis and taxonomic annotation to address the lack of tools built specifically for microbial eukaryotes, and created mock communities to assess recovery success in protistan metatranscriptome analysis workflows (Chapters 4 and 5). I applied these tools to a three-year metatranscriptomic dataset from Cape Cod Bay to investigate a recent emergence of a summer coccolithophore population in the 20-year time series, tracking shifts in nutrient physiology to identify potential bottom-up controls (Chapter 6). This dissertation advances approaches to constrain the protistan taxonomic diversity that underlies shifts in global primary productivity and nutrient turnover. Specifically, strains of a single phytoplankton species revealed diversity relevant to a global ecosystem model. Future work will clarify variability in protistan gene content and expression that may underpin both protists’ present ecological niches and their future climate change response.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157085</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dimers, Trimers and their Superpositions in a Bose-Fermi Mixture</title>
<link>https://hdl.handle.net/1721.1/157084</link>
<description>Dimers, Trimers and their Superpositions in a Bose-Fermi Mixture
Chuang, Alexander
This thesis describes experiments on few- and many-body bound states in a Bose-Fermi&#13;
mixture of ultracold ²³Na and ⁴⁰K atoms. We examine the formation of dimers and trimers in&#13;
a balanced, thermal mixture and their evolution into strongly interacting Bose polarons with&#13;
hybridized dimer and trimer character when we instead immerse an impurity concentration&#13;
of K into a dense quantum bath of Na.&#13;
We report a novel direct observation of a heteronuclear halo trimer, consisting of two&#13;
lighter Na atoms and one heavier K atom, alongside the familiar NaK Feshbach dimer, using&#13;
radiofrequency (rf) spectroscopy. We find that in proximity to a Feshbach resonance, the&#13;
trimer feature closely follows the dimer resonance across an order-of-magnitude variation&#13;
in binding energy. We show that the measured binding energies are consistent with our&#13;
theoretical model of the trimer as having the structure of a Feshbach dimer weakly bound&#13;
to one additional boson.&#13;
We then study the fate of impurities interacting with a bosonic quantum bath, the&#13;
paradigmatic Bose polaron scenario. By preparing an initial attractive polaron state, we&#13;
probe previously inaccessible, highly-correlated Bose polaron states, again on the repulsive&#13;
side of the Feshbach resonance. Deep within the condensate, the rf spectra no longer exhibit&#13;
discrete dimer and trimer features as before, but are instead dominated by a single broad feature.&#13;
We attribute this to the impurity-boson coupling becoming stronger than the dimer-trimer&#13;
energy splitting, leading to hybridization of dimer and trimer states and, consequently, an effective level repulsion consistent with the spectra we observe. This experiment demonstrates&#13;
the remarkable interplay between polaron physics and bound-state formation in a quantum&#13;
environment.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157084</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of Central Bank Real Estate Purchases on Asset Prices</title>
<link>https://hdl.handle.net/1721.1/157083</link>
<description>Impact of Central Bank Real Estate Purchases on Asset Prices
Batista, Quentin
This paper estimates the impact of central bank real estate purchases on asset prices, demonstrating an increase of 0.1% to 0.2% in Real Estate Investment Trust (REIT) prices in the hours following a typical intervention of 0.014% of market capitalization. At longer horizons, the purchases do not appear to have a significant aggregate effect. The primary identification strategy exploits the nature of the Bank of Japan’s (BoJ) policy rule, which triggers purchases when the Tokyo Stock Exchange Real Estate Investment Trust index falls below a certain threshold. Alternative research designs that exploit the counter-cyclical nature of the BoJ’s policy rule and cross-sectional variation in the eligibility of REITs for BoJ purchases are also considered. Overall, these findings are inconsistent with the predictions of canonical and recent models of asset pricing.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157083</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lessons From President Moon Jae In’s Housing Policy and The Road to Affordable Home Ownership in Seoul, South Korea</title>
<link>https://hdl.handle.net/1721.1/157082</link>
<description>Lessons From President Moon Jae In’s Housing Policy and The Road to Affordable Home Ownership in Seoul, South Korea
Cho, Kibong
A fundamental goal of housing policy is to provide a safe and quality place to live for the population. This thesis studies the provision of affordable homeownership in Seoul, South Korea, particularly for non-homeowners and first-time buyers who did not have an opportunity to participate in the housing boom that previous generations experienced. In Seoul, 58% of the population are non-homeowners. First, this thesis provides a brief introduction to Korean housing history. Second, it discusses the housing policy under President Moon Jae In, and how housing prices soared under his administration due to misguided efforts. Finally, it describes the necessary path towards mitigating the housing affordability crisis that has been created in Seoul using both supply- and demand-side arguments.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157082</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Case Study in Marketing a Real Estate Debt Fund through the Design and Preparation of a Private Placement Memorandum (PPM) and Investor Presentation</title>
<link>https://hdl.handle.net/1721.1/157081</link>
<description>A Case Study in Marketing a Real Estate Debt Fund through the Design and Preparation of a Private Placement Memorandum (PPM) and Investor Presentation
Poirier, Richard Scott
Private equity-backed real estate debt funds play a crucial role in providing capital to borrowers seeking financing for construction projects. These funds raise capital from investors, deploy it strategically, and actively manage debt investments to generate returns for their limited partners. The appeal lies in the potential for attractive yields and risk management strategies in a complex investment landscape. There are countless potential fund structures to address a range of investment strategies, risk profiles, investor appetites, geographic considerations, and manager experience and deal access. This study delves into the dynamics of capital raising for a real estate debt fund specializing in private construction loans. It covers the essential elements of the Private Placement Memorandum (PPM), including legal disclosures, investment terms, risk factors, and fund-specific details. This research aims to provide a real-world example of a fund designed according to current trends and market terms for use by a real-life investment manager, ProBuilder Financial LLC. The PPM and the associated investor presentation utilize best practices for presenting complex financial information in a clear and concise manner. Bridging theory and practice sheds light on the strategies, risk-reward trade-offs, and market implications associated with this capital-raising channel.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157081</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular characterization of microbial interactions with labile&#13;
dissolved organic matter</title>
<link>https://hdl.handle.net/1721.1/157080</link>
<description>Molecular characterization of microbial interactions with labile&#13;
dissolved organic matter
Halloran, Kathryn H.
Marine microbes produce and consume labile dissolved organic matter (DOM), generating a carbon flux with significant implications for global carbon cycling and microbial ecosystems. Intracellular measurements of biological activity cannot fully capture microbial interactions with dissolved carbon. Better understanding this carbon flux thus requires direct and compound-specific characterization of metabolites, the small organic biomolecules that make up labile DOM. However, these measurements are challenging due to low metabolite concentrations, high ambient salt concentrations, and the complexity of labile DOM. More complete characterization of dissolved metabolites is therefore a standing challenge in the field. This in turn leaves many open questions with respect to the specificity of microbe-DOM interactions and the biotic and abiotic drivers of those interactions. This thesis addresses those challenges and questions. In Chapter 2, I explore the compound-specific uptake of metabolites by the copiotrophic gamma-proteobacterium Alteromonas macleodii, with a focus on metabolites derived from the cyanobacterium Prochlorococcus. I find that Alteromonas grows on 3-methyl-2-oxobutanoic acid, a valine intermediate, but not on the other cognate branched chain amino acid intermediates. This substrate selectivity is likely driven by transporter specificity. The distinct fate of these structurally similar molecules emphasizes the importance of compound-specific characterization of labile DOM. To expand our ability to make these compound-specific measurements, in Chapter 3 I develop a method for derivatizing carboxylate-, carbonyl-, and phosphate-containing molecules via aniline derivatization, solid phase extraction, and liquid chromatography-tandem mass spectrometry (LC-MS/MS). This method is able to quantify 51 different metabolites dissolved in seawater, 25 of which could not be detected previously, with pM to nM detection limits. 
I verify the utility of this method by applying aniline derivatization to phytoplankton culture filtrates and field samples. Additionally, I show that where dissolved metabolites can be quantified by multiple methods, the measurements obtained by aniline derivatization are in good agreement with measurements yielded by other methods. Finally, in Chapter 4 I use aniline derivatization to characterize the diel variability of labile DOM produced by phototrophic microbes. Here, I apply aniline derivatization to filtrate from cultures of Prochlorococcus grown under 24-hour diel light/dark conditions and sampled every two hours. I find that Prochlorococcus cells not only release metabolites into solution, but also take those metabolites up again, with diel rhythmicity. Together, this thesis shows that microbe-DOM interactions can be remarkably subtle and complex; expands our ability to quantify the metabolites that make up labile DOM; and demonstrates the importance of directly quantifying these dissolved metabolites to fully characterize microbial ecology and carbon cycling in the ocean.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157080</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural insights into microbial one-carbon metabolic enzymes: Ni–Fe–S-dependent carbon monoxide dehydrogenases and acetyl-CoA synthases</title>
<link>https://hdl.handle.net/1721.1/157079</link>
<description>Structural insights into microbial one-carbon metabolic enzymes: Ni–Fe–S-dependent carbon monoxide dehydrogenases and acetyl-CoA synthases
Biester, Alison
Carbon monoxide dehydrogenase (CODH) and acetyl-CoA synthase (ACS) enzymes play crucial roles in the global carbon cycle by catalyzing reversible carbon dioxide reduction and reversible acetyl-CoA synthesis, respectively. In some cases, CODHs are monofunctional, whereas in other cases CODHs form complexes with ACSs and their catalysis is coupled through an internal gas channel between the CODH and ACS active sites. These carbon-fixing enzymes are thought to be among the oldest on Earth, dating back to the last universal common ancestor based on strong conservation of these enzymes between bacterial and archaeal domains of life. In this thesis, we present structural characterizations of bacterial and archaeal CODHs. Using xenon pressurization, we elucidate gas channel paths in a monofunctional CODH from bacteria through crystallographic studies. This structure provides the first experimental visualization of gas channels in a monofunctional CODH. We compare monofunctional CODH gas channels to the gas channels observed in bacterial CODH/ACS complexes and find monofunctional CODH gas channels are highly branched compared to those in CODH/ACS complexes, wherein the specificity of the gas channel path is important for active site coupling. In methanogens, CODH and ACS catalysis are coupled, but a complex between these two enzymes had never previously been visualized. The methanogenic CODH/ACS complex has been particularly mysterious because the methanogenic ACS lacks the domain that binds CODH in acetogens. In this work, we use cryogenic electron microscopy to capture the first-ever snapshot of an archaeal CODH/ACS complex. We observe a hydrophobic cavity between the CODH and ACS active sites that is rerouted relative to bacterial CODH/ACSs but conserved with a channel path in the monofunctional CODH. In another cryogenic electron microscopy structure of the archaeal CODH alone, we see that this hydrophobic cavity becomes plugged such that CO cannot leave CODH unless ACS is bound. 
This channel plugging mechanism is conserved with the channel plugging mechanism observed in the acetogenic CODH/ACS complex. This work advances our understanding of how CO is carried to and between active sites in CODH and ACS, and elucidates intriguing similarities between CODH/ACS complexes in acetogens and methanogens.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157079</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental and Computational Advancements in Peptidomimetic Ligand Discovery</title>
<link>https://hdl.handle.net/1721.1/157078</link>
<description>Experimental and Computational Advancements in Peptidomimetic Ligand Discovery
Lee, Michael Alan
The usage of peptides as therapeutics is a growing area of interest within the pharmaceutical industry for the facilitation of protein-protein interactions (PPIs). Peptides inhabit a unique therapeutic space because of their high levels of chemical customization balanced with their potential for high specificity due to a wide variety of potential structures. At the same time, discovery tools for finding peptides that modify PPIs have evolved, including advances in affinity selection techniques and combinatorial chemistry. Specifically, the usage of solid phase peptide synthesis for split-and-pool chemistry allows for rapid access to highly diverse (&gt;10⁸ total sequences) compound libraries for use in ligand discovery. A primary technique for in vitro ligand discovery is affinity selection-mass spectrometry (AS-MS), which utilizes tandem mass spectrometry to decode complex mixtures of peptide ligands pulled down from a peptide library through affinity selection. This approach provides unique advantages due to the high levels of chemical customization that can be performed on synthetic peptide libraries, including the incorporation of unnatural amino acids or the modification of library structure through macrocyclization.&#13;
This thesis will focus on the development of experimental and computational tools to analyze affinity selection datasets more efficiently and thoroughly. We demonstrate a synthesis of macrocyclic peptide libraries that increases the diversity of synthetic macrocyclic libraries while utilizing accessible, efficient chemistry for cyclization. These libraries are then used for the discovery of novel ligands to two proteins. Structure-activity relationships are established for one of these ligands and its affinity is matured through the use of focused libraries containing a variety of unnatural amino acids. Additionally, we investigate a variety of resins used for solid phase peptide synthesis, particularly in the synthesis of small domain proteins or difficult peptide sequences.&#13;
Because of the high numbers of peptides synthesized and pulled down by AS-MS experiments, efficient computational methods are crucial for effective ligand discovery efforts. Here, we discuss two methods of expanding data analysis, first by a sequence-independent enrichment quantification. AS-MS experiments operate using the decoded peptide sequence from tandem MS/MS data to nominate potential hit peptides, but that process depends on the efficient fragmentation of a&#13;
target peptide and the quality of the subsequent MS2 spectrum. We utilize techniques to identify putative hits through the comparison of peptide enrichment based only on the mass-to-charge ratio without an assigned sequence, allowing for label-free MS1 quantification. The second method utilizes machine learning techniques to rationalize trends in successfully sequenced peptide sequences from AS-MS experiments with respect to target proteins. This approach allows for the creation of a ligand “sequence space”, which allows for the incorporation of unnatural amino acids in ligand discovery.&#13;
Overall, this thesis presents a variety of methods to enhance the scope of peptide-based drug discovery. We anticipate this work to accelerate the process of drug discovery through a diversification of peptide structure combined with more powerful computational analytics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157078</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Airline operating cost reduction through enhanced engine health analytics</title>
<link>https://hdl.handle.net/1721.1/119307.2</link>
<description>Airline operating cost reduction through enhanced engine health analytics
Luu, Henry H. T
Engine Health Management (EHM) is a comprehensive maintenance service offered by engine manufacturer Pratt &amp; Whitney (PW) to its airline customers. In its current form, engine performance is monitored through recorded physical metrics, such as gas temperature, pressure, and altitude, taken as single snapshots at various phases of flight. The advent of the Enhanced Flight Data Acquisition, Storage and Transmission (eFAST™) system, which allows for near-continuous recording of engine metrics, provides Full-Flight Data Analytics (FFDA) that may proactively alert and recommend maintenance activity to airlines. Adopting eFAST™ may help avoid Adverse Operational Events (AOE) caused by unexpected engine failures and the associated cost burdens. With respect to operating cost, airlines standardly report Cost Per Available Seat Mile (CASM) and Cost Per Block Hour (CBH). EHM services that prevent operational disruptions can help airlines reduce these unit-cost metrics, whose scrutiny by industry analysts affects investment guidance, stock performance, and overall business outlook. In this study, the value of FFDA services to airlines is investigated on the International Aero Engines V2500, a mature engine with customers' operational histories well-documented. Using a Poisson distribution to model the occurrence of six operational disruption types (Inflight Shutdown, Aircraft-On-Ground, Aborted Takeoff, Air Turn-Back, Ground Turn-Back, and Delay/Cancellation), the cost savings potential is quantified as a function of events avoided by a hypothetical FFDA service. Airline Form 41 financial data from the Bureau of Transportation Statistics is then used to estimate the magnitude of savings on CASM and CBH retroactively for 2012-16. 
Results show that unit cost reductions of 0.5% to 1.5% are possible through engine event avoidance, representing savings up to $104M annually, but outcomes are highly dependent on assumptions about the cost of operational disruptions for each individual carrier. Overall, a baseline model and procedure are developed for valuing FFDA and associated EHM services. Further collaboration between airlines and Pratt &amp; Whitney on data availability and accuracy will help refine this model, which is the first to bridge publicly available airline costs with engine history data, helping stakeholders transition to an eFAST™ ecosystem that promises greater operational efficiency and safety.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/119307.2</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cross Section Measurement of Exclusive ϕ-Meson Electroproduction off the Proton at CLAS12</title>
<link>https://hdl.handle.net/1721.1/157071</link>
<description>Cross Section Measurement of Exclusive ϕ-Meson Electroproduction off the Proton at CLAS12
Moran, Patrick
This analysis studies the exclusive ϕ meson electroproduction process ep → e′p′ϕ at CLAS12 in the kinematic region 0.39 ≤ Q² ≤ 8.38 GeV², 1.97 ≤ W ≤ 4.03 GeV, and 0.17 ≤ −t ≤ 7.26 GeV². Cross section σ(Q²,W) and differential cross section dσ/dt (Q²,W,t) measurements are reported. The scaling of the overall cross section was determined to be 1/Q^(6.47 ± 0.97), which is consistent with the Generalized Parton Distribution (GPD) prediction of 1/Q⁶. The ratios of the longitudinal and transverse cross sections, R = σ_L/σ_T, are extracted from the angular decay distributions for four values of Q² and are found to be consistent with the GPD scaling prediction. The mean-square gluonic radius of the proton ⟨b²_g⟩ is extracted from the t-dependence of the differential cross sections dσ/dt in the kinematic region 0.12 ≤ x_B ≤ 0.39, the first such measurement in the valence regime.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157071</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Synthesis of Stimuli-Responsive Polymers with Programmable Cleavability</title>
<link>https://hdl.handle.net/1721.1/157070</link>
<description>Design and Synthesis of Stimuli-Responsive Polymers with Programmable Cleavability
Zafar, Hadiqa
Polymers comprise a large portion of modern-day materials, from everyday plastics that we can hold and use, to nanomaterials imperceptible to the naked eye. Applying synthetic chemistry to impart structural changes to established polymers offers a promising path to introduce novel functionalities for applications ranging from biology to sustainability. In particular, this thesis explores the synthesis, characterization and evaluation of polymeric platforms that rationally incorporate moieties that can undergo chemical cleavage, enhancing their design and performance. In the first half, we explore advancements to linker design and controlled release of payloads from molecular bottlebrush polymers. The first chapter introduces bottlebrush polymers as nanocarriers for therapeutics, and provides a detailed literature analysis of the synthetic and architectural developments that have been reported for these constructs, as well as outlooks for the future. The second chapter reports the first synthesis of peptide-containing bivalent bottlebrush (co)polymers (BBPs), featuring caspase-3-cleavable peptides linked to fluorogenic probes that provide a “turn-on” signal upon enzymatic cleavage. The impacts of different architectural features of these polymers on enzyme access reveal insights into the interactions of enzymes with BBPs, and provide design criteria for future therapeutic systems leveraging this approach. The third chapter investigates a synergistic approach to treating pancreatic ductal adenocarcinoma (PDAC) with drug-loaded BBPs by leveraging multiple facets of structural modularity, including linker and drug identities and concentration ratios. This mechanism-guided approach to combination therapy is validated with the translation of in vitro studies that identify synergy across axes of both drug release timing and mechanism of action to in vivo validation of enhanced therapeutic efficacy of the combination BBP system. 
The remaining two chapters are a departure from BBPs, instead introducing a novel approach to cleavable comonomers for improving plastic end-of-life sustainability. The fourth chapter thus provides detailed background on the current plastic waste outlook, vinyl polymers and their synthesis, radical ring-opening polymerization, and current approaches to cleavable comonomers and the end-of-life options they offer commodity polymers. The fifth and final chapter reports the first “mixed” cleavable comonomer approach to degradable polymers towards a polyacrylic acid system optimized for biodegradability. A computational model offers parameters for controlling degradation fragment molecular weight and dispersity that are validated experimentally, and the material performance properties of the homopolymer are retained for its cleavable analog. Overall, this thesis leverages structure-activity relationships of cleavable functionalities in stimuli-responsive polymers, and expands the scope under which they can be utilized during their productive lifetime or processed thereafter.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157070</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multifunctional Wireless Gut-Brain Neurotechnology</title>
<link>https://hdl.handle.net/1721.1/157069</link>
<description>Multifunctional Wireless Gut-Brain Neurotechnology
Sahasrabudhe, Atharva
The complexity of the brain is well known, in that it uses specially organized neural circuits to interact with the external world. Besides external stimuli, the brain also receives, integrates, and responds to sensory signals emerging from internal organs of the body through the network of the peripheral nervous system. Although these nerve signals are subliminal and cannot be consciously detected or controlled, they play a profound role in maintaining a homeostatic state. Recent evidence also suggests that interoceptive signals can impact higher-level cognitive functions. The anatomical, functional, and molecular details about these brain-body pathways are beginning to be deciphered, but a lot remains to be uncovered. Cutting-edge neurobiological tools like optogenetics, chemogenetics, and activity-based sensors have revolutionized studies of the brain. However, application of these methodologies for studies of brain-body circuits is reliant on engineered devices that support these sophisticated functions in peripheral organs too. Studying interoceptive circuits in a causal fashion in behaving animals, thus, requires advanced multifunctional implantable neurotechnologies that can be deployed at multiple sites spanning regions in the brain and the peripheral organ of interest. This thesis aims to address this unmet technological need.&#13;
This work presents a collection of advances that overcome thermomechanical constraints of fiber drawing and allow processing of traditionally non-drawable components. These advances yielded multifunctional probes that allow depth-specific optical, electrical, and pharmacological probing of neural circuits in the brain, while also being compatible with brain-wide functional magnetic resonance imaging techniques. The same underlying design principles have also made possible fiber-based miniaturized electrochemical probes for performing electrocatalytic reactions in the brain to deliver transient, gaseous neurotransmitters, such as NO, through controlled generation and delivery in vivo. Finally, wireless microelectronic fibers that combine the scalability and mechanical versatility of thermally drawn polymer fibers with the sophistication of microelectronic chips for organs as diverse as the brain and the gut were developed. This approach produces meters-long continuous fibers that can integrate light sources, electrodes, thermal sensors, and microfluidic channels in a miniature footprint. Paired with a custom-fabricated control module, the fibers wirelessly deliver light for optogenetics and transfer data for physiological recording. This technology was validated by modulating the mesolimbic reward pathway in the mouse brain and the anatomically challenging intestinal lumen to demonstrate wireless control of sensory epithelial cells and vagal afferents that guide an animal’s feeding and reward behaviors.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157069</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tracing RNA biography: in situ transcriptome profiling by novel spatial&#13;
omics technologies</title>
<link>https://hdl.handle.net/1721.1/157068</link>
<description>Tracing RNA biography: in situ transcriptome profiling by novel spatial&#13;
omics technologies
Ren, Jingyi (Rena)
Cell state and function are shaped by the spatiotemporal regulation of gene expression. This intricate pattern of gene expression is, in part, attained through the precise regulation of mRNA: its metabolism, transport, and translation within individual cells across spatial and temporal dimensions. Therefore, it is critical to methodically delineate the spatially resolved post-transcriptional regulations within transcriptomes, studying these events at the single-cell and single-molecule level. This advantage is important for mapping the complex network of transcriptional and post-transcriptional gene regulatory mechanisms inherent in cells and tissues. Moreover, our understanding of RNA translation in diverse cell types and states will be greatly enriched by the examination of spatially resolved protein synthesis patterns at the genomic scale within heterogeneous cells. Presently, the state-of-the-art spatial transcriptomic techniques offer only static snapshots of RNA expression, falling short of capturing RNA dynamics and their controlled translation within subcellular domains. Therefore, our driving question is whether the spatial regulation of the multi-staged RNA life cycle influences cellular state and activity. Thus, an unmet need is to develop new methods capable of spatially tracking not only steady-state RNA expression but also their post-transcriptional states. This work is essential in providing a comprehensive picture of spatial RNA dynamics in cellular function and physiology. Filling this gap, I developed a novel in situ sequencing toolbox to study spatially resolved post-transcriptional RNA dynamics at the genomic scale in single cells during my PhD studies. My graduate work has led to the development of two novel in situ profiling technologies: (1) TEMPOmap (temporally resolved in situ sequencing and mapping), which resolves nascently-transcribed RNAs in space and time, and (2) RIBOmap (ribosome-bound mRNA mapping), a spatial ribosome profiling method. 
Utilizing these methods, we were able to holistically profile spatial, temporal and single-molecule information of RNA at the transcriptomic and translational levels in single cells. The main contribution of this work is that we established a specialized spatial transcriptomic toolkit specific for capturing the dynamics of mRNA in situ. Applying these technologies, I’ve profiled spatial, temporal and single-molecule information of RNA in single cells at the transcriptomic and translational levels in a range of biological systems, including iPSCs, primary skin cells and intact brain tissues. Specifically, I’ve focused on quantifying key steps in the mRNA life cycle in their spatial context, including RNA synthesis, nuclear export, translation, cytoplasmic translocation, and degradation. My goal was to better grasp the link between gene function and RNA lifespan at a genomic level across different cell types. Notably, we found that (1) different mRNAs are controlled both post-transcriptionally and translationally, with distinct subcellular localizations within cells; (2) in contrast to the previous belief that RNA dynamics solely depend on the primary sequence, they in fact exhibit diverse dynamic behavior for the same RNA species based on cell states, types, and even tissue regions. In primary skin samples, we noted cell-type-dependent alterations in the rates of RNA synthesis, transport, and degradation. Additionally, the translation level varied across cell types and regions within intact mouse brain tissue.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157068</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Procollagen Folding in Health and Disease</title>
<link>https://hdl.handle.net/1721.1/157067</link>
<description>Procollagen Folding in Health and Disease
Yammine, Kathryn Marie
Procollagen is a large, complex, and in many ways unusual protein that is ubiquitous in the human body and in all animals. Decades of research have advanced our understanding of how cells fold and secrete this protein; nonetheless, many questions remain concerning procollagen biosynthesis and how the process can go awry in the case of collagenopathies. Understanding how these mechanisms break down in disease is key to (1) gaining a better fundamental understanding of how these mechanisms function, and (2) developing effective and targeted strategies for disease-modifying treatment. In this thesis, we discuss some of the newly appreciated mechanisms involved in procollagen folding in health and disease. In Chapter 2, we explore the molecular basis of procollagen assembly, and uncover a new role for the triple helical domain sequence in guiding trimer assembly. In Chapters 3 and 4, we develop, characterize, and deploy an expandable human cartilage model to examine the processes of procollagen proteostasis that break down in the cases of the chondrodysplasia-inducing Gly1170Ser and Arg719Cys substitutions in procollagen-II, respectively. In Appendix C, we explore the functional differences between two alternatively spliced forms of the procollagen-II N-propeptide and speculate about the role and importance of aspartate hydroxylation in ocular function and homeostasis. Collectively, the work described in this thesis advances our understanding of the molecular mechanisms involved in procollagen proteostasis in health and disease.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157067</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Spin Patterning in Metal-Organic Frameworks</title>
<link>https://hdl.handle.net/1721.1/157066</link>
<description>Chemical Spin Patterning in Metal-Organic Frameworks
Petry, Stephanie Michelle
Emergent phenomena are ubiquitous and fundamental to life as we know it, serving as vital environmental regulators, such as the production of honey by honeybees. These phenomena occur when individual components of a system interact, generating new collective behaviors. In magnets, interactions between electron spins result in emergent properties with profound fundamental and technological implications. As metal–organic frameworks (MOFs) are highly tailorable materials, this thesis will examine the utilization of MOF platforms to engineer bespoke spin properties. Through deliberate manipulation of magnetic interactions, we engineer custom magnetic materials with unique emergent properties. Our investigation begins with the controlled construction of magnetic interactions in a family of chemically similar but structurally distinct metal-organic materials. Despite sharing the same magnetic components, variations in their structural and magnetic dimensionalities significantly influence their magnetic behaviors. In the following section, we address current experimental challenges in engineering spin frustration within honeycomb lattices. We introduce a novel model for spin frustration on this lattice and employ MOFs to realize this concept. The tailorable nature of this MOF platform facilitates the investigation of how manipulable chemical interactions influence the resulting magnetic properties. The concluding section outlines a synthetic strategy for designing an underexplored magnetic model, leveraging the versatility of MOF synthesis to make new materials from preexisting structures. Highlighting our initial findings, we offer brief insight into the future prospects of this endeavor. These combined studies underscore the remarkable potential of MOF platforms in creating designer magnetic materials, representing significant progress in the field of condensed matter physics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157066</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure and dynamics of magnetic domain walls in multi-sublattice magnetic oxides</title>
<link>https://hdl.handle.net/1721.1/157065</link>
<description>Structure and dynamics of magnetic domain walls in multi-sublattice magnetic oxides
Huang, Siying
Spintronics is a field that lies at the intersection of magnetism and electronics, making use of the electron spin in solid-state devices for data storage and manipulation. A promising future spintronic technology is racetrack memory, where magnetic domain walls (DWs) encode bits of information and are translated by currents on thin-film racetrack devices. What enables the current-driven motion of the DW is its Néel character, generally stabilized by the Dzyaloshinskii–Moriya interaction (DMI). Fast DW motion of the order of km/s was shown in multi-sublattice metallic systems, overcoming the fundamental limits in ferromagnetic systems through angular momentum compensation of the sublattices. Recently, DMI and even faster DW motion have been observed in thin-film rare-earth iron garnets. However, the net angular momentum in such systems is shown to be far from angular momentum compensation. Moreover, the mechanism of the DMI in garnets is shown to be distinct from that in the metallic systems, thus requiring further understanding as well. In this thesis, we examine magnetic DWs in such multi-sublattice magnetic oxides. We demonstrate a strong tunability of the DMI, by a factor of 7, through the substrate in Pt/garnet thin films, providing further understanding of the DMI mechanism. For the anomalously fast DW motion, we present an explanation by the field-like torque counteracting the damping-like torque and increasing the spin Hall efficiency. We propose measuring the DW velocity with a transverse field applied to probe this field-like torque, and present experimental evidence. We investigate the DW depinning dynamics in Pt/BiYIG thin films, presenting a phase diagram of this pinning event, which demonstrates the crucial role of minimizing the pinning effect in achieving fast DW velocity.
In EuIG(110) thin film with strong in-plane anisotropy, we demonstrate bistable Néel DW states interchangeable by in-plane field pulse-driven incoherent DW reversal, from which we extract for the first time the Bloch line energetics. Besides the above DW Néel character stabilization, we also provide a DW position stabilization on the racetracks by the exchange bias effect in Pt/Co/Pt/Co₀.₈Ni₀.₂O thin films. This thesis provides a comprehensive understanding of the stability and dynamics of DWs on racetrack devices based on magnetic oxides, from the aspects of both scientific understanding and technical optimizations, paving a path to future innovation and optimization in racetrack memory device design.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157065</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic and Analytic Methods in Combinatorics</title>
<link>https://hdl.handle.net/1721.1/157064</link>
<description>Probabilistic and Analytic Methods in Combinatorics
Sawhney, Mehtaab
This thesis studies a range of topics across combinatorics, broadly defined. The second chapter addresses a long-standing question of Erdős regarding the existence of high-girth Steiner triple systems. The tools employed fall squarely within the context of the probabilistic method, drawing on recent advances within design theory and the theory of random processes. The third and fourth chapters consider problems within random matrix theory, in particular problems regarding sparse random graphs. The third chapter concerns a question of Vu regarding the singularity of the k-core of a random graph. In particular, a sparse Erdős–Rényi graph G(n,d/n) with high probability has large corank due to the presence of isolated vertices. Answering a question raised by Vu at the ICM 2014, the third chapter proves that by iteratively deleting vertices of degree less than k (i.e., forming the k-core) the associated graph is nonsingular with high probability. The fourth chapter answers a longstanding question regarding the spectral distribution of a matrix where each entry is 1 with probability d/n. In particular, this result gives the first spectral distribution for non-Hermitian random matrices at this level of sparsity and answers a question that was highlighted by Tikhomirov at the ICM 2022. The fifth and sixth chapters are concerned with discrepancy theory. The fifth chapter provides bounds for online vector balancing by finding a Markov chain on R with integer steps whose stationary distribution is Gaussian. The sixth chapter concerns a famous result of Spencer on finding a low-discrepancy coloring of a set system. This chapter gives the first algorithm for finding a low-discrepancy coloring which runs in nearly input-sparsity time. The seventh chapter concerns effective bounds for special cases of the polynomial Szemerédi theorem.
In particular, answering a question of Green, this chapter gives effective bounds for sets avoiding the pattern x, x + y² − 1, x + 2(y² − 1) (i.e., Roth’s theorem with a shifted square difference). This is the first polynomial pattern which is not homogeneous and has complexity at least one for which effective bounds have been obtained. Furthermore, this chapter introduces the use of higher-order techniques within the context of degree-lowering. The final chapter concerns a question at the intersection of probabilistic combinatorics and statistical physics. This chapter determines the sharp constant γ such that with high probability a graph G ∼ G(n,1/2) may be split into two equal parts A and B such that each vertex in A has γ√n more neighbors in A than in B. This provides an essentially complete resolution to a question of Füredi and draws on a combination of methods from graph enumeration and boolean analysis.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157064</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Dynamics on Integrable Lattice Models</title>
<link>https://hdl.handle.net/1721.1/157063</link>
<description>Stochastic Dynamics on Integrable Lattice Models
Nicoletti, Matthew S.
The purpose of this thesis is to present some new results related to the six-vertex and dimer models. One theme is the construction and analysis of Markov processes naturally associated to these lattice models. Certain integrability properties of the six-vertex and dimer models, often related to the Yang–Baxter equation, allow for the construction of associated Markov chains. In some cases, these are measure-preserving Markov chains on configurations of the lattice model. In other cases, they arise via transfer matrices, after choosing a distinguished time coordinate, as a continuous-time degeneration of the “time evolution” of the lattice model itself. It is often the case that the integrability of the underlying lattice model provides powerful tools to study the associated Markov chains or their marginals, which are sometimes Markov chains themselves. In Chapter 2, we construct Markov chains on six-vertex states in the quarter plane Z²≥₀ and the full plane Z². When viewing the six-vertex model as a model of random surfaces, the Markov chain is an example of a surface growth model in the (2+1)-dimensional “Anisotropic KPZ” (or “AKPZ”) universality class. In the Z² case, the translation-invariant Gibbs measures of the stochastic six-vertex model are stationary measures of the process. Using structure-preserving local moves for the dimer model, in Chapter 3 we construct another surface growth model in the AKPZ universality class, which has the dimer model Gibbs measures as stationary distributions. By exactly computing key quantities such as the current, we confirm predictions from the physics literature on the AKPZ universality class, and we confirm the expected hydrodynamic limit PDE of the growth process in special domains known as tower graphs.
To complement our analysis of the growth process, we analyze the local asymptotics of dimer model correlation functions on tower graphs, and confirm in this case the prediction ([1]) that they converge to those of translation-invariant Gibbs measures. In Chapter 4, we construct a Markov chain generalizing domino shuffling which samples exactly from a recently introduced probability measure on tuples of interacting dimer configurations. Exact sampling is extraordinarily useful for the discovery and numerical investigation of asymptotic phenomena in new models. In Chapter 5, we utilize local moves for a different purpose: we construct deterministic t-embeddings, which are embeddings of a bipartite graph that are compatible with the underlying dimer model. It was recently shown ([2], [3]) that a certain subclass of these, perfect t-embeddings, can ultimately be used to prove “conformal invariance of the model” in the scaling limit. Furthermore, for each local move in the dimer model, there is a corresponding local geometric transformation of t-embeddings ([4]). For Aztec diamond and tower graphs, this allows for an inductive construction of perfect t-embeddings. We utilize the “exact solvability” of the resulting recurrence relations to give exact formulas for the embeddings. We then precisely characterize the global and local asymptotic behavior of the embeddings, and confirm predictions of [3], [5] in these two cases. In Chapter 6, we utilize the Yang–Baxter equation for a colored generalization of the six-vertex model to compute stationary measures for colored interacting particle systems. In several cases, we match our constructions to existing stationary measures, while in other cases we obtain new stationary measures. We provide a new, unified construction and method of proof (of stationarity) for several different interacting particle systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157063</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The algebraic K-theory of the chromatic filtration and the telescope conjecture</title>
<link>https://hdl.handle.net/1721.1/157062</link>
<description>The algebraic K-theory of the chromatic filtration and the telescope conjecture
Levy, Ishan
We develop tools for understanding the algebraic K-theory of categories such as those coming from the chromatic filtration of the stable homotopy category, and apply these tools to improve our understanding of the large scale structure of stable homotopy theory and understand Ravenel's telescope conjecture.&#13;
&#13;
More specifically, in joint work with Burklund, we prove a general devissage result which in particular identifies the algebraic K-theory of certain coconnective ring spectra satisfying suitable regularity and flatness hypotheses with the K-theory of their π₀. Using this and an extension of the Dundas–Goodwillie–McCarthy theorem to (−1)-connective ring spectra, we obtain a formula for the algebraic K-theory of the K(1)-local sphere in terms of the topological cyclic homology of a ring spectrum j_ζ, and in particular find that its algebraic K-groups are not all finitely generated. In joint work with Lee, we extend these computations to understand the algebraic K-theory of the K(1)-local sphere in the stable range using THH, where we observe phenomena such as the failure of Zₚ Galois descent for THH for an extension of j_ζ. In joint work with Burklund, Hahn, and Schlank, we show that the failure of Zₚ-descent also happens for the T(2)-local TC of this extension. Combining this with the cyclotomic redshift result of Ben-Moshe–Carmeli–Schlank–Yanovski, this implies that the T(2)-local algebraic K-theory of the K(1)-local sphere is not K(2)-local, and is hence a counterexample to the height 2 telescope conjecture. We also give similar counterexamples to the height n telescope conjecture for all n≥2 and all primes, and show that Zₚ Galois hyperdescent for chromatically localized algebraic K-theory generically fails.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157062</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Chromatin Organization and Dynamics with Coarse-Grained Modeling</title>
<link>https://hdl.handle.net/1721.1/157061</link>
<description>Understanding Chromatin Organization and Dynamics with Coarse-Grained Modeling
Liu, Shuming
The genome is the blueprint of human life, and it is crucial to understand its organization. The genome organization is hierarchical with different principles dominating at different scales. At the near-atomistic level, nucleosomes are organized as ordered chromatin fibers or disordered chromatin arrays. Furthermore, chromatin and related proteins can function within condensate environments. Computational modeling provides valuable insights into such complex biological processes. Considering the complexity of chromatin and biomolecular condensates, coarse-grained (CG) modeling is essential to achieve the biologically relevant timescales. We have developed CG models and toolkits to facilitate modeling chromatin and related proteins. We have also applied CG protein and DNA models to study chromatin folding and phase separation.&#13;
&#13;
In Chapter 1, we begin with an overview of the hierarchical scales of genome organization. We also introduce CG modeling as a powerful tool to understand the chromatin structures and dynamics. In Chapters 2 and 3, we demonstrate the development of CG simulation force fields and toolkits. In Chapter 2, we present novel CG force fields trained with contrastive learning. We have achieved a new set of hydropathy parameters trained with a99SB-disp all-atom force field trajectories of intrinsically disordered proteins, which accurately reproduces their average radius of gyration. In addition, we have developed a unified force field that captures the average radius of gyration of both ordered and disordered proteins in the training set. In the future, we will focus on benchmarking our models and existing CG models with condensate simulations, which enables more appropriate selections of CG models based on specific conditions. In Chapter 3, we introduce OpenABC, a versatile toolkit designed to streamline the setup of CG simulations, especially condensate simulations. OpenABC incorporates diverse CG force fields within an extensible framework and is built on a simulation platform that supports GPU acceleration, thus speeding up CG simulations. &#13;
&#13;
In Chapters 4 and 5, we shift our focus to the applications of CG simulations. In Chapter 4, we discuss the force extension and inter-chain contacts of chromatin fibers. Our CG simulations reveal that the chromatin fiber behaves like an elastic spring under forces of no more than 3 pN, while it dramatically unstacks and unwraps at approximately 4 pN. Meanwhile, inter-chain contacts can help unfold the native two-start fibril-like structures. The study demonstrates that biologically relevant pN-level forces and crowding environments contribute to the absence of 30-nm fibers in vivo. In Chapter 5, we apply Markov state models and non-Markovian dynamics models to study the folding dynamics of tetra-nucleosomes. The tetra-nucleosome with 10n+5-bp linkers shows more diverse structures without dominant native structures, while 10n-bp linkers lead to a funnel-shaped free energy landscape with a strong folding trend. Within the condensate, the transition rates slow down, while the unfolding and folding rates are comparable. These two studies highlight that the intrinsic physical chemistry properties of chromatin are fundamental to genome organization in cells.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157061</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring Metal–Organic Frameworks for Water Harvesting</title>
<link>https://hdl.handle.net/1721.1/157060</link>
<description>Tailoring Metal–Organic Frameworks for Water Harvesting
Oppenheim, Julius Jacob
Water sorbents enable technologies that have the potential to mitigate water insecurity, meet an increasing energy demand, and push towards sustainability. Metal–organic frameworks (MOFs) are candidate sorbents for such technologies as a direct result of their inherent chemical modularity, which facilitates the use of MOF sorbents to adsorb water over a large range of relative humidity (RH). However, the underlying structure–function relationships connecting MOF composition and structure with sorption properties have yet to be explicitly determined. In this thesis, the author explores and defines such structure–function relationships. Chapter 1 introduces the important sorption properties as well as the top-performing MOFs and MOF families. In Chapter 2, the author presents a derivation of a relationship between pore composition and the observable sorption parameters (critical RH, maximum gravimetric capacity, and presence of hysteresis loops). Chapter 3 realizes the insights from the preceding chapter to design and synthesize an industrially viable sorbent with high capacity below 30% RH and excellent cycling stability. Chapter 4 further explores these insights, with a focus on the observation that ions contained within the framework pore can greatly increase the hydrophilicity of a framework. Within Chapter 5, the author investigates the relationship between pore hydrophilicity and kinetic hysteresis, finding that kinetic limitations arise in sufficiently hydrophilic frameworks. Chapter 6 explores the driving differences in interaction between a framework and π-backbonding sorbates, for a framework in which the water sorption properties have been previously reported. Within Chapter 7, the author explores an alternative method for post-synthetic modification, whereby chlorine radical abstraction is utilized to reduce a framework, which may be useful for the synthesis of new sorbents.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157060</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic Studies of Interfacial Proton-Coupled Electron Transfer to Molecularly Defined Surface Sites</title>
<link>https://hdl.handle.net/1721.1/157059</link>
<description>Mechanistic Studies of Interfacial Proton-Coupled Electron Transfer to Molecularly Defined Surface Sites
Lewis, Noah B.
Nearly all electrocatalytic reactions and all aqueous electrochemical systems involve either the reductive formation or oxidative scission of a surface-hydrogen bond in an interfacial proton-coupled electron transfer (I-PCET) reaction. Whether as an intermediate step in electrocatalysis or a stoichiometric charging step in a pseudocapacitor, and regardless of whether it is involved in product formation or solution degradation, I-PCET can occur in any protic electrolyte. Despite I-PCET reactions’ integral role in electrochemical energy storage and value-added chemical synthesis, molecular-scale models for I-PCET mechanisms have historically been lacking. The general heterogeneity and dynamism of electrode surfaces make it difficult to identify relevant surface active sites and therefore nearly impossible to correctly attribute changes in reactivity to surface-based or electrolyte-based effects. In contrast to standard heterogeneous surfaces, graphite-conjugated carboxylate (GC-COOH) electrodes display stable, isolated, unique, and atomically precise active sites. Investigating I-PCET at GC-COOH electrodes therefore introduces unprecedented clarity into the chemical nature of surface-H bonds and eliminates convolution from differences in electrode structure between electrolyte conditions. This thesis utilizes GC-COOH electrodes to explore how two fundamental electrolyte properties, pH and ionic strength, control I-PCET kinetics, with an understanding of both properties’ kinetic dependence leading to new mechanistic insights for I-PCET reactivity.&#13;
 &#13;
Chapter 2 concerns how I-PCET kinetics are controlled by electrolyte pH and how the observed rate dependence informs I-PCET mechanisms. Equilibrium apparent rate constants (k_app) for I-PCET were measured to be fastest at both pH extremes but reach a minimum at pH 10. The lack of pH-independent regions and the asymmetric slopes of the “V”-shaped k_app vs pH dependence observed for I-PCET stand in stark contrast to the rate-pH dependence and path-dependent mechanism established for outer-sphere proton-coupled electron transfer. Such differences highlight the need for an alternative mechanistic model for I-PCET. With these observations, a donor-identity-dependent model for I-PCET is developed. In this model, I-PCET occurs through one of two proton donor/proton acceptor couples, either a hydronium/water couple predominating at low pH and slowing with increased pH or a water/hydroxide couple predominating at high pH and slowing with decreasing pH. These studies constitute the first molecular-scale mechanistic understanding of elementary I-PCET reactions.  &#13;
&#13;
Chapter 3 investigates how high concentrations of proton-neutral supporting electrolytes affect I-PCET kinetics. We measure proton activity with the reversible hydrogen electrode and I-PCET kinetics with GC-COOH from 1 mole kg⁻¹ to 17 mole kg⁻¹ NaClO₄ in unbuffered perchloric acid, acetate-buffered, and unbuffered sodium hydroxide aqueous electrolytes. While the proton activity of unbuffered acidic conditions increases drastically across this concentration range, that of the buffered and basic electrolytes changes little. Additionally, a significant decrease in I-PCET rates versus the rate expected for the measured proton activity is observed for the acidic and buffered electrolytes but not the basic electrolytes. With these observations we construct a mechanistic model in which I-PCET is not a single step, but a multi-step reaction sequence in which elementary I-PCET is gated by an ion exchange reaction between proton donor/acceptor species and proton-neutral supporting electrolyte at the electrode-electrolyte interface. These findings demonstrate how supporting electrolyte can be leveraged as a design parameter to independently control electrolyte pH and the rates of I-PCET-based reactions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157059</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Characterization of Iron-Sulfur Cluster Excited States and their Relevance to Electron Transfer Reactions</title>
<link>https://hdl.handle.net/1721.1/157058</link>
<description>Experimental Characterization of Iron-Sulfur Cluster Excited States and their Relevance to Electron Transfer Reactions
Skeel, Brighton A.
Iron-sulfur (Fe−S) clusters, and cuboidal [Fe₄S₄] clusters in particular, are biologically ubiquitous metallocofactors involved in a diverse array of cellular processes. The electronic structures of these metallocofactors are highly multiconfigurational, and characterized by dense manifolds of low-energy excited states. The ground states for these systems have been extensively studied for several decades, and are understood to be the products of a confluence of super-exchange and spin-dependent electron delocalization interactions. On the other hand, our understanding of the excited states of these clusters—many of which are measurably populated at ambient temperature—is minimal, due largely to the fact that describing these states both experimentally and computationally is a daunting task. Here, we simplify this problem first by recognizing that imposing a particular ligand field symmetry (namely 3:1 site differentiation) on a [Fe₄S₄] cluster causes some of its excited states to become degenerate in well-defined ways. With this in mind, we describe the synthesis of an array of 3:1 site-differentiated [Fe₄S₄]¹⁺ clusters, and characterize them by variable temperature (VT) solution NMR spectroscopy and magnetometry. Our global fits of these VT data using a simplified model Hamiltonian have furnished, for the first time, experimental pictures of the excited state manifolds for [Fe₄S₄]¹⁺ clusters, including both their low-energy spin states and alternate valence electron configurations (“valence isomers”). We find that the energy scale associated with both of these phenomena is commensurate with that of the thermal energy at ambient temperature, and that these alternate valence arrangements and spin configurations are thus relevant to understanding the room temperature reactivity of biological Fe−S systems.
We find additionally that the primary coordination sphere has a strong influence on the topography of these excited state landscapes, in particular that the donor properties of the ligands binding an [Fe₄S₄]¹⁺ cluster determine its ground state valence electron distribution. Finally, we describe the variable temperature electron transfer self-exchange kinetics for a series of [Fe₄S₄]¹⁺/²⁺ clusters where we have experimentally mapped the excited spin state manifolds, thus taking the first steps toward connecting the excited state manifolds of these metallocofactors to their reactivities.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157058</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Positive traces and analytic Langlands correspondence</title>
<link>https://hdl.handle.net/1721.1/157057</link>
<description>Positive traces and analytic Langlands correspondence
Klyuev, Daniil
I will describe my results with co-authors in two directions. &#13;
&#13;
The first direction is the problem of classification of positive traces on quantized Coulomb branches. In the most general setting, this problem generalizes the classical problem of describing irreducible unitary representations of real reductive Lie groups. We consider the case of Kleinian singularities of type A and provide a complete classification of positive traces.&#13;
&#13;
The second direction is the analytic Langlands correspondence, which is the following. Let X be a smooth irreducible projective curve over C and G a complex reductive group. On one side of this conjectural correspondence there are Gᵛ-opers on X satisfying a certain topological condition (real opers), where Gᵛ is the Langlands dual group. On the other side there is the joint spectrum of certain operators on L²(Bun_G), called Hecke operators, where Bun_G is the variety of stable parabolic G-bundles on X and L²(Bun_G) is a Hilbert space of square-integrable half-densities. We prove most of the main conjectures of the analytic Langlands correspondence in the case when G = PGL₂(C) and X is either a genus one curve with points or P¹ with higher structures at points.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157057</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kazhdan-Laumon Categories and Representations</title>
<link>https://hdl.handle.net/1721.1/157056</link>
<description>Kazhdan-Laumon Categories and Representations
Morton-Ferguson, Calder
In 1988, D. Kazhdan and G. Laumon constructed the Kazhdan-Laumon category, an abelian category A associated to a reductive group G over a finite field, with the aim of using it to construct discrete series representations of the finite Chevalley group G(F_q). The well-definedness of their construction depended on their conjecture that this category has finite cohomological dimension. This was disproven by R. Bezrukavnikov and A. Polishchuk in 2001, who found a counterexample for G = SL₃. Since the early 2000s, there has been little activity in the study of Kazhdan-Laumon categories, despite their being beautiful objects with many interesting properties related to the representation theory of G and the geometry of the basic affine space G/U. In the first part of this thesis, we conduct an in-depth study of Kazhdan-Laumon categories from a modern perspective. We first define an analogue of the Bernstein-Gelfand-Gelfand Category O for Kazhdan-Laumon categories and study its combinatorics, establishing connections to Braverman-Kazhdan’s Schwartz space on the basic affine space and the semi-infinite flag variety. We then study the braid group action on D^b(G/U) (the main ingredient in Kazhdan and Laumon’s construction) and show that it categorifies the algebra of braids and ties, an algebra previously studied in knot theory; we then use this to provide conceptual and geometric proofs of new results about this algebra. After Bezrukavnikov and Polishchuk’s counterexample to Kazhdan and Laumon’s original conjecture, Polishchuk made an alternative conjecture: though the counterexample shows that the Grothendieck group K₀(A) is not spanned by objects of finite projective dimension, he noted that a graded version of K₀(A) can be thought of as a module over Laurent polynomials, and conjectured that a certain localization of this module is generated by objects of finite projective dimension.
He suggested that this conjecture could lead toward an alternate proof that Kazhdan and Laumon’s construction is well-defined, and he proved it in types A₁, A₂, A₃, and B₂. We prove Polishchuk’s conjecture in all types and show that Kazhdan and Laumon’s construction is indeed well-defined, giving a new geometric construction of discrete series representations of G(F_q).
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157056</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Fifty Million Dollar Piece of Dirt: Somerville as a Case Study in Development</title>
<link>https://hdl.handle.net/1721.1/157055</link>
<description>A Fifty Million Dollar Piece of Dirt: Somerville as a Case Study in Development
Aizman, Asya
In May 2023, the City of Somerville achieved the highest S&amp;P Global Ratings credit rating, AAA. The accompanying report, citing one gentrifying neighborhood as a “notable contributor to increased market value,” signaled the city’s “attractiveness” to potential investors by promising low interest rates on local real estate development projects. But while the city increasingly appeared to be a sure bet for investors, life became more strenuous for residents, with steep and climbing rents, failing infrastructure, and fewer reasons to stay in a changing city that they no longer recognized. This is a case study of twenty years in Somerville real estate development, spanning 2004 to 2024. Through interviews with residents, activists, and senior city officials, I present a story of a city attempting to reconcile its progressive values with the forces of neoliberalism, which it seems unable—and unwilling—to stop.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157055</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The practical estimation of the value of tanning materials with points on tanning</title>
<link>https://hdl.handle.net/1721.1/157054</link>
<description>The practical estimation of the value of tanning materials with points on tanning
Nickerson, Wm. E.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157054</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An investigation of the action of chloride of sulphur upon spirits of turpentine</title>
<link>https://hdl.handle.net/1721.1/157053</link>
<description>An investigation of the action of chloride of sulphur upon spirits of turpentine
Waite, Charles N.; Low, Albert Howard, 1855-1936.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157053</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determination of oxygen in organic bodies</title>
<link>https://hdl.handle.net/1721.1/157052</link>
<description>Determination of oxygen in organic bodies
Fish, Chas. C. R.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157052</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An economic study of the A. B. C. Traction Company</title>
<link>https://hdl.handle.net/1721.1/157051</link>
<description>An economic study of the A. B. C. Traction Company
Bain, L. D.; Estill, Harry F., 1861-1942.
Thesis: B.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1924
</description>
<pubDate>Tue, 01 Jan 1924 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157051</guid>
<dc:date>1924-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear analysis of the pressoreceptor reflex system</title>
<link>https://hdl.handle.net/1721.1/157050</link>
<description>Nonlinear analysis of the pressoreceptor reflex system
Levison, William H. (William Henry), 1936-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1964; Vita.; Includes bibliographical references (leaves 166-170).
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157050</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Leningrad Physico-Technical Institute and the birth of Russian physics</title>
<link>https://hdl.handle.net/1721.1/157049</link>
<description>The Leningrad Physico-Technical Institute and the birth of Russian physics
Josephson, Paul Robert.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Political Science, 1987; Bibliography: leaves 461-477.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157049</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A measure of the period and amplitude of the variations of AD Canis Minoris</title>
<link>https://hdl.handle.net/1721.1/157048</link>
<description>A measure of the period and amplitude of the variations of AD Canis Minoris
Martin, David W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1984; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157048</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A microprocessor driven liquid crystal graphics display for aircraft use</title>
<link>https://hdl.handle.net/1721.1/157047</link>
<description>A microprocessor driven liquid crystal graphics display for aircraft use
Marzke, Lee Howard.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1984; Bibliography: leaf 35.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157047</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric nonlinearities in guyed towers</title>
<link>https://hdl.handle.net/1721.1/157046</link>
<description>Geometric nonlinearities in guyed towers
McClure, Ghyslaine.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1984; Vita.; Bibliography: leaves 110-114.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157046</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The use of transpiration in a precision temperature-controlled enclosure</title>
<link>https://hdl.handle.net/1721.1/157045</link>
<description>The use of transpiration in a precision temperature-controlled enclosure
Mastanduno, Richard Thomas.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Bibliography: leaf 30.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157045</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-destructive evaluation of resistance seam welds by acoustic emission</title>
<link>https://hdl.handle.net/1721.1/157044</link>
<description>Non-destructive evaluation of resistance seam welds by acoustic emission
Markey, Karl R.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Bibliography: leaf 45.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157044</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of techniques to measure the low wavenumber wall pressure spectrum of a turbulent boundary layer</title>
<link>https://hdl.handle.net/1721.1/157043</link>
<description>Comparison of techniques to measure the low wavenumber wall pressure spectrum of a turbulent boundary layer
Martini, Kyle F.
Thesis: Mech. E., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157043</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Action of tungstic acid upon gelatin</title>
<link>https://hdl.handle.net/1721.1/157042</link>
<description>Action of tungstic acid upon gelatin
Atwood, Wm. P.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157042</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing United States Energy Poverty Policy: Regulatory Design Alternatives and Resource Allocation</title>
<link>https://hdl.handle.net/1721.1/157037</link>
<description>Assessing United States Energy Poverty Policy: Regulatory Design Alternatives and Resource Allocation
Heller, Peter J.
Guaranteeing sufficient and affordable access to energy services is increasingly critical as climate change continues to worsen, energy costs increase due to the need to meet decarbonization goals, and general inequality among citizens persists. To ensure the affordability of energy services, in this thesis, I analyze the design of policies and programs addressing energy poverty according to the four strategy decisions that I argue must be made during their ideation: assistance, targeting, funding, and governance. I focus on the strategies designed and implemented in the US and the EU and discuss the benefits and disadvantages of the different approaches followed in both contexts. Based on this comparative analysis, I find there are changes to US federal policy design that should be implemented to better serve households living in energy poverty. Specifically, current allocations under the US Low Income Home Energy Assistance Program (LIHEAP) to states have been nearly static since 1984, while the distribution of energy poverty is dynamic in location and time. To improve the allocation of federal resources, I develop a novel machine learning approach based on sociodemographic and geographical information to estimate energy burden in each US census tract for 2015 and 2020. This analysis reveals an increase in average household energy burden and a broadening of the range of households experiencing energy poverty. To improve the targeting strategy of LIHEAP, I design an optimized allocation structure that illustrates a shift in funding from northern states to the southern US. To better match household assistance needs, this analysis urges policymakers to revise the distribution of resources to reflect where concentrations of energy poverty exist in the US.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157037</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>NBA Sleep Tracking Data Imputation</title>
<link>https://hdl.handle.net/1721.1/157036</link>
<description>NBA Sleep Tracking Data Imputation
Licht, Joseph D.
This thesis investigates imputation methods for nights of missing sleep wearable data from NBA Academy athletes. Sparsity in sleep tracking data arises as a result of behavioral non-compliance or device malfunction, hindering the NBA Academy's ability to provide actionable insights that improve player sleep, a crucial component for player development. Motivated by existing work on time series data imputation, four main techniques are evaluated: K-Nearest Neighbors Regression, Linear Interpolation, Linear Regression, and Quadratic Regression. Each technique is applied and evaluated on key sleep metrics such as sleep duration, rMSSD (Root Mean Square of the Successive Differences between Heartbeats), and average heart rate. Results indicate K-Nearest Neighbors Regression and Linear Interpolation, with access to data in the past and future (offline imputation), as the best-performing sleep imputation methods. Furthermore, this thesis utilizes the NBA Academy's shooting and jumping datasets in conjunction with the sleep dataset to explore a relationship between sleep and athletic performance, finding a generally weak correlation between sleep and athletic performance data, regardless of the time lag. This research has applications in all areas of sport and performance as well as in domains where data sparsity is problematic.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157036</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Farm-Scale Water Storage in Morocco: Low-Carbon Design with Parametric FEA Optimization</title>
<link>https://hdl.handle.net/1721.1/157035</link>
<description>Farm-Scale Water Storage in Morocco: Low-Carbon Design with Parametric FEA Optimization
Trézarieu, Raphaël
Morocco faces increasing water scarcity with an anticipated decline in rainfall. Rising temperatures have resulted in drier and denser soil, causing water to be trapped on the surface and evaporate. One solution is to shift water management from large-scale to farm-scale. Underground water reservoirs allow the catchment of sparse rainfall events and the resultant overland flows before their evaporation. This research develops a methodology to design such rectangular reinforced concrete water reservoirs using a parametric approach in Python coupled with Finite-Element Analysis (FEA) software. The aim is to offer designs that are both low in embodied carbon and affordable for an individual farmer to build. The first section of the method identifies a small region of the design space containing the Pareto front, before FEA is run on a limited set of geometries in the second section. In the first section, the global shape of the reservoir and the local structural elements are simultaneously designed using analytical expressions from the Eurocodes applied over multi-dimensional arrays. One key added value of the method lies in the framework developed to handle numerous arrays of different dimensions while tracking the indices of each combination of design variables.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157035</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fully Differential Programmable Gain Chiplet for Integrated Data Acquisition Systems</title>
<link>https://hdl.handle.net/1721.1/157034</link>
<description>Fully Differential Programmable Gain Chiplet for Integrated Data Acquisition Systems
Liu, Monica
Chiplets have risen in popularity because their intermediate level of chip integration allows for high performance, low cost, and greater flexibility. There are currently programmable gain instrumentation amplifier chips on the market, which are widely used in industrial and instrumentation data acquisition systems. However, with built-in operational and fully differential amplifiers, these products cannot be easily upgraded as new and improved amplifiers are released to the market. To address this issue, this thesis proposes the design of a programmable gain chiplet that offers the desired flexibility in changing a system’s gain while adding the ability to interface with various amplifiers, without sacrificing significant performance.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157034</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of Gradient Flow with Contrastive Learning</title>
<link>https://hdl.handle.net/1721.1/157033</link>
<description>Dynamics of Gradient Flow with Contrastive Learning
Tepe, Cem
Contrastive learning (CL), in different forms, has been shown to learn discriminative representations for downstream tasks without the need for human labeling. In the representation space learnt via CL, each class collapses to a distinct vertex of a simplex on a hypersphere during training. This property, also seen in other types of learning tasks, might explain why CL works as well as it does. Having class collapse on the test distribution, which determines how well the model generalizes to new samples and new classes, is tied to class collapse on the training distribution under certain conditions, as studied by Galanti et al. (2022). In the case of CL, minimizing the contrastive loss has been shown to lead to collapse during training by Graf et al. (2021). In a recent study, Xue et al. (2023) show that minimizing the contrastive loss is not enough to observe class collapse in the representation space for a single-layer linear model, and that minimum norm minimizers are needed for the collapse to happen. However, their results don't explain how class collapse can occur without adding an explicit bias. The implicit bias of gradient descent is a likely candidate to explain this phenomenon. Here, we investigate the gradient flow of the spectral contrastive loss and give a theoretical description of the learning dynamics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157033</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty &amp; robustness for single-cell studies</title>
<link>https://hdl.handle.net/1721.1/157032</link>
<description>Uncertainty &amp; robustness for single-cell studies
Shiffman, Miriam
The advent of new technologies capable of measuring molecular profiles at single-cell granularity, across thousands or millions of cells, offers unprecedented insight into the form, function, and circuitry of biological systems. At the same time, these technologies present particular statistical and computational challenges, including noise, sparsity, technical and biological variability, and multilevel sampling regimes. To distill relevant signal from biological phenomena, then, analyses must combine information in a careful and coherent way across cells. In light of these complexities, it is prudent that single-cell analyses incorporate notions of uncertainty and robustness to guide their interpretation and inform future decision making.&#13;
&#13;
This thesis makes two main advances in facilitating coherent, actionable quantification of uncertainty and robustness for single-cell studies. First, we provide a framework for generalizability of differential expression analysis that—unlike common statistical tools (significance, power, standard error)—does not rely on the assumption that the sample in hand is drawn independently from the same distribution as future samples. Instead, we posit an alternate (complementary) lens on generalizability: could dropping a very small fraction of cells meaningfully alter the basic conclusions of differential expression? We develop an accurate and efficient approximation to estimate this dropping-data robustness metric for the key outcomes of differential expression, for independent-observation and pseudobulk analyses. Broadening these gene-level results to a high-level, biologically meaningful summary, we overcome the inherently non-differentiable and combinatorial nature of gene set enrichment analysis to provide an additional approximation for the dropping-data robustness of top gene sets. Applied to public single-cell RNA-seq data of healthy and diseased cells, our metric identifies widespread nonrobustness across genes that extends to high-level nonrobustness of top gene sets. The second part of this thesis provides a full Bayesian framework for reconstructing probabilistic trees of cellular differentiation from single-cell profiles. Namely, motivated by the biology of differentiation and confronted with a lack of existing hierarchical models, we develop a new family of probabilistic trees in which data is generated continuously along branches (and latent cell state evolves smoothly over the tree). We also develop two approaches, focusing on gene-level or cell-level variability, to model measurement noise arising from single-cell RNA-sequencing.
In tandem, we construct a novel Markov chain Monte Carlo sampler over trees, including message passing with variable augmentation to accelerate inference. These techniques recover latent trajectories from simulated single-cell transcriptomes, and make progress toward inferring trajectories, with calibrated uncertainties, from real transcriptomes.&#13;
&#13;
I close by reflecting on common themes relevant to uncertainty and robustness for single-cell studies, including interplay between the continuous and the discrete, the challenge of summarization, the importance of cyclical model criticism, and a possible way forward through differentiable and probabilistic programming.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157032</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Material Recovery Potential from Solar Photovoltaics: Predictive Modeling and Characterization to Advance the Circular Economy</title>
<link>https://hdl.handle.net/1721.1/157031</link>
<description>Material Recovery Potential from Solar Photovoltaics: Predictive Modeling and Characterization to Advance the Circular Economy
Bakker, Nicole
In the next two decades, an exponentially growing quantity of waste will be generated as solar panels reach their end-of-life. Meanwhile, demand for new solar capacity will increase the value of key raw materials, underscoring the importance of recycling and movement toward a “circular economy”. However, uncertainties over the quantity and the exact material composition of solar panel waste hamper investments by recyclers, manufacturers, and governments. In this study, I construct a Material Flow Analysis model to forecast the global quantity of recoverable materials through 2100, informed by an experimental characterization of representative solar panels from the 1930s to 2020s. To account for potential changes in future demand, I develop two distinct scenarios: one explores the growing electricity demand from artificial intelligence use (‘Artificial Intelligence Boom’), while the other features renewable hydrogen production for steelmaking, shipping and the chemical industry (‘Green Hydrogen Takes Off’). The combined model predicts a lower material demand for silicon than previously anticipated in the base case, with a cumulative installed solar PV capacity of 50 TW and a waste volume of 3,600 metric megatonnes by 2100. This will require 45 megatonnes of solar-grade silicon by 2100, while 18 megatonnes could theoretically be obtained from recovered material. Achieving a circular economy for silicon is possible by the mid-2040s, but will require recovery rates above 70% and continued improvements in material efficiency as observed in the retrospective analysis. Recovery would suffice for all silicon demand through the mid-2060s, but not through 2100, because the demand for new solar panels and replacements outpaces secondary supply. Of specific concern for material recovery is the material composition: results from characterization indicate the presence of toxic materials, including lead, and scarce elements in solar cells.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157031</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Meat Me for Supper? Envisioning the Future of Protein Food</title>
<link>https://hdl.handle.net/1721.1/157030</link>
<description>Meat Me for Supper? Envisioning the Future of Protein Food
Maynard, Christopher Coleman
This report investigates future challenges associated with protein food and explores two proposed mitigation strategies for overcoming them: dietary change and cultivated meat. Utilizing IMPACT, this report assesses the food security dimensions of availability and economic access for protein food relative to the EAT-Lancet recommendations, projected to 2050, under various shared socioeconomic pathways. This work reveals a near-universal over-supply of red meat as well as an under-supply of plant protein across UN member states, even as animal sources of protein far exceed their plant counterparts on a price-per-kilocalorie basis. Additionally, this report conducts a high-level SWOT analysis of key issues in cultivated meat, finding that the technology platform could deliver meaningful environmental and health benefits, but without overcoming important technical and political barriers, will remain unavailable and inaccessible for the foreseeable future. Together, these findings offer insights for food and agricultural policymakers interested in planning and preparing for protein-related issues in the next quarter-century. This report concludes with policy recommendations, intended primarily for the United States.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157030</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>NeuralMOVES: Extracting and Learning Surrogates for Diverse Vehicle Emission Models</title>
<link>https://hdl.handle.net/1721.1/157029</link>
<description>NeuralMOVES: Extracting and Learning Surrogates for Diverse Vehicle Emission Models
Ramirez Sanchez, Edgar
Technological advancements and interventions in the transportation sector play a crucial role in addressing climate change, given its major contribution to greenhouse gas emissions. The industry actively explores electrification, automation, and intelligent infrastructure to mitigate emissions. However, the successful design and implementation of these solutions require accurate and representative emission models. The Motor Vehicle Emission Simulator (MOVES) serves as the gold-standard emission software provided by the Environmental Protection Agency (EPA). Despite its prominence, MOVES poses practical challenges, including a steep learning curve and technical complexities. This makes it cumbersome for macroscopic analysis and unsuitable for microscopic analyses like eco-driving, which demand emissions estimation for individual steps. To address these issues, we present a comprehensive family of high-performance and lightweight CO₂ emission models devised through reverse engineering MOVES and surrogate learning. Our models show a promising 6% end-to-end error relative to MOVES, exhibit significant differences from alternative reduced-order models, and offer improved precision. The implications of our work are twofold: our models simplify GHG emission evaluation in transportation-related analyses by providing a faster, programmatic alternative to MOVES, and they improve control-based approaches by offering microscopic, environment-feature-rich models compared to alternatives.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157029</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Investigation into Practical Aluminum Scrap for Emergency Power Fuel in Disaster Response Situations</title>
<link>https://hdl.handle.net/1721.1/157028</link>
<description>An Investigation into Practical Aluminum Scrap for Emergency Power Fuel in Disaster Response Situations
Blanks, Lauren J.
As natural disasters become more frequent and severe, pitfalls of emergency logistics are exacerbated. Protracted time between the disaster and the restoration of critical infrastructure, like the power grid, can extend beyond hours or days. In the meantime, communities are left without critical resources like electricity. To address this gap, this research seeks to investigate the possibility of a system that would leverage the debris fields of a disaster to a community's advantage. Building on MIT researchers' activation of high purity aluminum to produce heat and hydrogen in a reaction with water, aluminum scrap from the field could be used to generate hydrogen for fuel cell power systems. Therefore, practical aluminum scrap, specifically the used beverage can, was investigated for its ability to react efficiently and produce hydrogen under the constraints of expeditionary equipment and techniques. Moreover, a preliminary characterization of the reaction's gas output informed the potential for fuel cell contamination. Finally, the proposed system's feasibility within the disaster policy framework is discussed. Together, these findings underscore the potential to harness aluminum scrap as a post-disaster energy source, encouraging further research.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157028</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lineage-level within-species dynamics in the human facial skin microbiome</title>
<link>https://hdl.handle.net/1721.1/157027</link>
<description>Lineage-level within-species dynamics in the human facial skin microbiome
Baker, Jacob S.
Multiple lineages of a bacterial species can coexist in a community. These extremely closely-related clades originate from the recent immigration of individual cells, whose evolution over short timescales (years) results in minute genomic diversity (10¹ SNPs/genome). Each has distinct origins, and the mutations they contain can reveal their individual evolutionary and ecological history. However, the difficulty of differentiating coexisting lineages limits the phylogenetic resolution at which community dynamics can be studied. Here, I describe methods to cluster large sets of diverse genomes into lineages and apply them to the observation of natural lineage-level assembly dynamics in the human facial skin microbiome. In Chapter 2, I use new methods to improve lineage-level clustering and delineate 4,055 genomes of C. acnes and S. epidermidis isolates from human facial skin into 167 lineages. In Chapter 3, I use these data to observe natural transmission events and assembly dynamics of the facial skin microbiome. I find that the gain and loss of individual C. acnes and S. epidermidis lineages underlies their apparent stability at the species level, and that these dynamics also change throughout the human lifespan. Lineages of S. epidermidis are replaced in unexpectedly fast cycles, and C. acnes lineages are acquired during developmentally-driven population expansion. By advancing current methods, I enabled the observation of new ecological dynamics at an unprecedented resolution. The dynamics described here will influence the development of therapeutic strains with durable engraftment, and inspire the study of their effects on hosts, such as the immune consequences of lineage-level turnover.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157027</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Documentation as a Tool for Algorithmic Accountability</title>
<link>https://hdl.handle.net/1721.1/157026</link>
<description>Documentation as a Tool for Algorithmic Accountability
Curtis, Taylor Lynn
This thesis argues that civil liability should rest on the deployer's understanding of system behaviour, and that documentation is the necessary tool to accomplish this goal. This work begins by establishing the “hole” in current approaches to AI risk regulation: the lack of a civil liability regime. It also highlights that civil liability is an already existing and effective regulatory tool that can be applied to AI. The rest of this thesis develops this argument by looking at what is necessary for such a framework to exist. It argues that an understanding of system behaviour is essential and achievable through documentation. It is divided into two substantive chapters. Firstly, Chapter 2 outlines how system behaviour can inform policy through documentation, linking the necessity of documentation to liability and proposing a concrete liability scheme based on documenting system understanding. Secondly, Chapter 3 discusses how documentation can alter a person's understanding of system behaviour, presenting a user study that demonstrates how system understanding can be achieved through documentation and structured data interaction. It argues that testing and system understanding are not insurmountable challenges and that by engaging in a relatively simple process, AI deployers can better understand the behaviour of their models. Overall, this thesis provides a methodical guide to understanding AI system behaviour and the establishment of a new pathway for effective regulation, arguing for the understanding of system behaviour and documentation at deployment as the path forward to achieve civil liability in AI.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157026</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>AI-Augmented Interface for Incremental App Development in MIT App Inventor</title>
<link>https://hdl.handle.net/1721.1/157025</link>
<description>AI-Augmented Interface for Incremental App Development in MIT App Inventor
Granquist, Ashley
The recent revolutionary advancements in Artificial Intelligence (AI) have presented immense opportunities and challenges in computer science education. This thesis presents the development of an AI-powered tool built on top of MIT App Inventor to help students incrementally design mobile applications. The tool allows students to describe desired changes to their MIT App Inventor mobile applications in natural language and have those changes be implemented automatically. Students can alternate between manually editing their app and using this tool, enabling them to collaborate with AI and incrementally develop apps with a degree of assistance from AI that meets their needs and is appropriate for their skill level and workflow preferences. This thesis also explores the benefits and detriments of such a tool, as well as observations and lessons learned from studying the ways students interact with the tool during a pilot study.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157025</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resiliency Oriented Scenario Generation Framework for Natural Gas Infrastructure</title>
<link>https://hdl.handle.net/1721.1/157024</link>
<description>Resiliency Oriented Scenario Generation Framework for Natural Gas Infrastructure
Lahogue, Malo
Traditionally, the impact of natural gas (NG) on power supply has been studied from a reliability perspective, focusing on frequent and low-impact events. Furthermore, power-NG interdependence has been considered at a local scale, with few possibilities for extension to future climate impacts. Our work contributes to a framework for scenario-based resilience quantification of regional power systems under power-NG interdependencies. Specifically, we develop a scenario generation approach to model disruptions in the intra-regional transmission infrastructure as well as supply restrictions due to contingencies in inter-regional NG supply chains. To account for the inter-regional interdependencies through the import capacity of NG into the regional system, we implement a Long Short-Term Memory (LSTM) model that predicts NG import capacity probability density based on weather conditions along transregional supply pipelines. Our ML model does not require detailed modeling of gas extraction rates and flows along pipelines since such information is not readily available. Furthermore, we develop a sampling procedure to capture low-probability but potentially severe disruption scenarios within the regional transmission infrastructure. To compute the corresponding probabilities, we utilize a physically-based structural reliability model for pipelines. &#13;
 &#13;
Crucially, by sampling the scenarios first and then estimating the corresponding probabilities, we account for low-probability “rare” events that can negatively impact the reliability of power supply. The resulting scenario set enables more precise quantification of power system resilience to correlated transmission and supply disruptions in the NG infrastructure. Since we utilize weather data to forecast NG import capacities as well as compute pipeline disruption probabilities, our work is well-suited for the integration of future climate projections in the risk-sensitive planning and resilient operations of power-NG systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157024</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tangible Telepresence: Distributed and Synchronous Tangible Interfaces for Enhancing Interpersonal Connectedness over Time and Space</title>
<link>https://hdl.handle.net/1721.1/157023</link>
<description>Tangible Telepresence: Distributed and Synchronous Tangible Interfaces for Enhancing Interpersonal Connectedness over Time and Space
Choi, Kyung Yun
In today's hyper-connected world, digital communication technologies have transformed how people maintain relationships across distances. However, constant digital stimuli and the pressure to always be available can lead to overwhelm, stress, and a lack of personal space. This thesis explores the concept of Tangible Telepresence, enhancing connectedness between intimate dyads through gestural engagement and seamless transitions between synchronous and asynchronous communication.&#13;
&#13;
To demonstrate this concept, this thesis introduces TeleTangibles: distributed and synchronous tangible interfaces that expand the bandwidth of interpersonal communication. TeleTangibles allow users to adjust their personal boundaries by moving between real-time and slow-paced communication within their physical space. The design space of TeleTangibles encompasses interaction spaces and expression levels, from abstract to concrete, through different motions and forms, focusing on engaging intimate dyads' nonverbal interactions and their perception of their relationship.&#13;
&#13;
The thesis presents two distinct TeleTangible examples, TelePop and Picto, addressing different aspects of the design space and demonstrating asynchronous communication through recording, replaying, and sharing tangible interactions remotely. Insights from these projects contribute to a deeper understanding of TeleTangibles' design space and the factors influencing their effectiveness in promoting interpersonal connectedness and social presence.&#13;
&#13;
The main contributions of this thesis are threefold: First, it extends real-time synchronous remote interaction to include asynchronous interaction through time-delayed responses, allowing individuals to adjust their levels of connectedness and enabling smooth transitions between interaction modes. Second, it proposes essential functionalities for developing Tangible Telepresence, illustrated through two TeleTangible examples, including recording and replaying interaction history and establishing mutual awareness through shared tangible languages and experiences. Third, it highlights that complex meanings or detailed information are not essential for strengthening connectedness when mutual awareness is established, as users perceive TeleTangibles as various forms of interaction that reduce the pressure of immediate response while confirming each other's status.&#13;
&#13;
This research contributes to the fields of Tangible User Interfaces and interpersonal communication by providing a new approach to expanding remote interpersonal communication media through playful gestural engagement. It offers a timely exploration of the challenges of maintaining social connectedness and respecting personal boundaries in an increasingly digital world.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157023</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contractor Learning and Home Energy Efficiency in Heat Pump Installations</title>
<link>https://hdl.handle.net/1721.1/157022</link>
<description>Contractor Learning and Home Energy Efficiency in Heat Pump Installations
Ontiveros, Johnattan H.
The displacement of fossil-fuel based heating is essential for achieving decarbonization in the building sector, which represents about a third of national emissions in the United States. Electric heat pumps are the primary technology needed to do so, but widespread adoption is hindered by a variety of factors including higher upfront costs and a shortage of experienced labor to fulfill installations. This work examines the role of learning in the cost and size of heat pump installations throughout the Massachusetts Clean Energy Center (MassCEC) rebate program. We find that as contractors gain experience, heating systems are downsized at the cost of fewer hours of displaced fossil-fuel based heating. This learning impact is strongest for homes with a natural gas backup heater, which is the cheapest source of heating in Massachusetts followed by electric heat pump heating. We then analyze the structure of the MassCEC rebate, and its potential influence on the benefits of the program.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157022</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering Community-Driven Determination of Values for Language Models</title>
<link>https://hdl.handle.net/1721.1/157021</link>
<description>Empowering Community-Driven Determination of Values for Language Models
Raman, Deepika
Emerging technologies like Artificial Intelligence and Large Language Models are often developed in Western contexts and carry implicit values, from developer choices or underlying training data, which are not adequately representative of the diverse contexts in which they are deployed. The misalignment resulting from this lack of engagement with non-Eurocentric value paradigms leads to inadequate and potentially harmful outcomes that impact these unconsidered communities. Codifying fundamentally subjective human values therefore necessitates eliciting these nuances through the inclusion and involvement of these very communities.&#13;
&#13;
This thesis argues that participants’ lack of familiarity with new technologies like Artificial Intelligence impacts their engagement and contribution to participatory processes of AI development. This thesis also helps demonstrate how grounded theory approaches can be leveraged to contextualize awareness-building efforts that can potentially empower community participation by addressing such familiarity gaps.&#13;
&#13;
This two-fold objective of (i) eliciting community-relevant attributes for language model alignment (ii) through the necessary familiarization with the technology in question is demonstrated through sample case studies. A grounded participatory process, CALMA (Community-aligned Axes for Language Model Alignment), is designed and evaluated through these cases to illustrate this contextualized alignment exercise. Lessons from this comparative case study are then extended to explore avenues for communities and institutions to adopt similar techniques that center the voices of the final users.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157021</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering Energy Conservation: Low-Cost Interventions for Commercial and Residential Settings</title>
<link>https://hdl.handle.net/1721.1/157020</link>
<description>Empowering Energy Conservation: Low-Cost Interventions for Commercial and Residential Settings
Ha, Lan L.
This thesis aims to investigate the effectiveness of low-cost interventions in promoting energy conservation in commercial and residential environments. The first chapter employs social norms to design and analyze three behavioral change programs in a large biopharmaceutical company, with a focus on reducing electricity consumption and plastic waste. The second chapter evaluates the effectiveness of a new behavioral initiative that aims to reduce residential electric and gas consumption. We employ econometric and machine learning techniques to measure average and heterogeneous treatment effects, as well as to identify disparities in households with the highest versus lowest reductions. Covering the process from design to evaluation, these chapters collectively offer a holistic perspective on the application of low-cost behavioral nudges in both workplace and residential energy usage. The implications drawn from our findings hold significant relevance for corporations, utilities, households, policymakers, and researchers alike, offering invaluable insights into promoting sustainable practices in both the workplace and the home.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157020</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Structure of the Registry Hall at Ellis Island</title>
<link>https://hdl.handle.net/1721.1/157019</link>
<description>The Structure of the Registry Hall at Ellis Island
Wilson, Ruth Hodin
This thesis presents the historical and structural analysis of the Guastavino barrel vault at the Registry Hall on Ellis Island. The Guastavino Construction Company's innovative tile structures from the late 19th and early 20th centuries, characterized by their efficiency in material use and formwork, are not fully understood by many engineers, especially in terms of their structural behavior as unreinforced masonry structures. The unique aspect of the Registry Hall vault is its construction below a steel truss framed ceiling system, a configuration that has not been previously studied.&#13;
&#13;
The primary objective of this study is to provide structural engineers with techniques for analyzing an unreinforced masonry structure in conjunction with a steel frame. Additionally, it aims to provide historical context by exploring how the Registry Hall structure fits into the history of the Guastavino Company. The structural behavior of the system is analyzed through three separate cases:&#13;
&#13;
1. Graphical analysis for the vault alone (Case 1)&#13;
2. Finite element analysis for the truss carrying the entire system (Case 2)&#13;
3. Analysis of the combined system (Case 3)&#13;
&#13;
Case 1 demonstrates the vault is stable on its own and the thrust forces are resolved in the columns. Case 2 demonstrates the truss has the capacity to support all loads, including the weight of the vault. Case 3 presents a third solution where the truss carries half the weight of the vault, indicating the two systems can work together effectively. &#13;
&#13;
This study offers three structural solutions for the complex ceiling at Registry Hall, demonstrating that there are infinite solutions for Guastavino structures. This improved understanding of a Guastavino barrel vault's structural behavior not only aids in evaluating the current state of Registry Hall, but also lays a foundation for analyzing historic masonry structures that incorporate a steel system.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157019</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overturning of No-Tension Towers</title>
<link>https://hdl.handle.net/1721.1/157018</link>
<description>Overturning of No-Tension Towers
Moir, Katherine
This study investigates the overturning behavior of leaning masonry towers on a rigid foundation. Unreinforced masonry is assumed to be incapable of withstanding tension, thus anticipating a progressive fracturing to occur outside the compressive zone of masonry towers as they incline under the force of self-weight alone. A theoretical model for the analysis of rectangular towers is extended to cylindrical towers, where overturning is assumed to occur when the fracture reaches through the entire width of the tower. The results of the theoretical model offer an approximate prediction for the critical angle of inclination that may be reached by a leaning no-tension cylindrical tower of variable slenderness and hollowness. A comparison of the predictions for each of the two tower geometries shows that the predicted critical angles of overturning are very close, while the cylinder is likely to begin cracking at lower inclinations compared to rectangular towers. The theoretical predictions for both rectangular and cylindrical towers are validated experimentally by tilting masonry model towers until failure. The experimental results are found to have reasonable agreement with the predictions, though overturning occurs earlier than predicted in all cases, which is attributed to imperfections in the models and scaling effects. As such, the theoretical predictions are unconservative for the critical angle of overturning of the models in the experiment. Furthermore, two case studies are conducted for existing leaning masonry towers in Italy, where theoretical predictions for their critical angles of overturning are put forth. The results of the case studies indicate that the Garisenda tower in Bologna is relatively close to its theoretical critical inclination, while the Leaning Tower of Pisa is not close. Both towers are found to be very close to their predicted angle of first cracking. 
However, the assumption of a rigid foundation does not account for the possibility of soil failure which remains a risk for leaning towers on compressible soils. Overall, the research guides further understanding of the failure conditions of masonry towers, which is useful in assessing their safety and preventing catastrophic collapses.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157018</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tree-based Data Replay for More Efficient LLM Continual Learning</title>
<link>https://hdl.handle.net/1721.1/157017</link>
<description>Tree-based Data Replay for More Efficient LLM Continual Learning
Bailey, Brian
As Large Language Models (LLMs) gain popularity, they face a crucial challenge: effectively updating their knowledge bases with new data while retaining knowledge of prior information. This challenge is compounded by the considerable computational resources and time required to do so. This problem has been previously addressed using multiple approaches, including data replay, Elastic Weight Consolidation (EWC), and others. This study introduces an evolutionary tree-based data replay method designed to enhance the efficiency of LLMs’ continual training. It leverages the evolutionary relationships among domain-specific data to inform the replay strategy, selectively excluding similar data from the training of current subdomains to optimize efficiency. Initial experiments identified Mistral-7B as the appropriate model for this analysis. Subsequent tests assessed its performance under different data replay configurations, focusing on perplexity as the primary performance measure. The results indicate that focused data replay maintains model performance and enhances training efficiency. Models trained under restrictive replay conditions—excluding data from parent nodes—achieved perplexity scores within 1.5% of the baseline and reduced training time by up to 20%. Moreover, an ablation study established that a minimum replay ratio of 0.4:1 is essential to keep performance within 8.2% of the baseline. The findings suggest significant potential for structured data replay in improving continual learning processes for LLMs. Future research should explore data selection based on similarity metrics or automatic data categorization to enhance scalability and applicability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157017</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimally invasive neuromodulation using mechanically-sensitive ion channels and magnetically-actuated nanotransducers</title>
<link>https://hdl.handle.net/1721.1/157016</link>
<description>Minimally invasive neuromodulation using mechanically-sensitive ion channels and magnetically-actuated nanotransducers
Malkin, Elian
Traditional methods of neuronal activity modulation, like pharmacological interventions and noninvasive techniques such as transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS) have limitations in specificity and penetration depth. Deep brain stimulation (DBS), while effective, is invasive and carries surgical risks. This thesis advances the approach of utilizing magnetic nanoparticles as mechanical force transducers to achieve minimally invasive, wireless neuromodulation using magnetic fields as the stimulation modality. By leveraging magnetic fields and mechanically sensitive ion channels, this method aims to provide precise neuronal activation of deep neural circuits without surgery. We describe the molecular biology behind conferring mechanosensation to neurons, the design of a membrane targeting mechanism via SNAPtags expressed on neuronal membranes, and the observed neuromodulatory effects for a gamut of mechanoreceptors and stimulation conditions. Calcium imaging results demonstrate that this method of nanotransducer targeting can elicit neuronal responses at 40 mT even via endogenous ion channels, and that greater amplitudes of response can be achieved through mechanosensitive ion channel expression and increased stimulation strength. We also develop data analysis code that is highly automated and employs advanced curve-fitting techniques to isolate the calcium imaging signal from background noise and fluorescence decay. The findings described in this thesis suggest that minimally-invasive mechanical neuromodulation can offer a safe and precise alternative to DBS for both clinical and research applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157016</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Parallel Algorithms for Planarity Testing</title>
<link>https://hdl.handle.net/1721.1/157015</link>
<description>New Parallel Algorithms for Planarity Testing
Hu, Amelia Y.
Planar graphs (graphs that can be drawn with no edge crossings) have special properties and are often used in applications such as circuit design or transportation networks. While many linear-work implementations of planarity testing algorithms exist, to the best of our knowledge, there is no practical implementation of a parallel planarity testing algorithm. In this thesis, we will describe and analyze two new parallel algorithms for planarity testing, both derived from the Boyer-Myrvold algorithm. First, we will present a divide-and-conquer approach, where the graph's edges are evenly distributed among worker threads. Each thread independently executes the sequential Boyer-Myrvold algorithm on its designated subgraph. Then, pairs of subgraphs are merged by embedding the edges between subgraphs with modified Boyer-Myrvold methods. The primary challenge of the divide-and-conquer approach is the merge step, as determining the relative positions of subgraphs is a complicated and difficult process. Next, we describe the design and implementation of a new and simpler parallel algorithm. This algorithm modifies the Boyer-Myrvold algorithm by processing vertices in layers from the bottom-up (rather than sequentially by reverse DFI order). The computation in each layer is parallelized. On planar graphs, this algorithm achieves 2.4–2.7 times speedup over the sequential algorithm when run on 16 cores. On non-planar graphs, the performance gain is even more significant, with speedups ranging from 9 to 22 times on 16 cores.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157015</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of Data Heterogeneity on Distributed Linear System Solvers</title>
<link>https://hdl.handle.net/1721.1/157014</link>
<description>Effects of Data Heterogeneity on Distributed Linear System Solvers
Velasevic, Boris
We focus on the fundamental problem of solving a system of linear equations. In particular, we are interested in distributed linear system solvers, where one taskmaster coordinates any number of workers to attain a solution. There are two predominant and fundamentally different ways of doing this: optimization-based and projection-based solvers. Although there is extensive literature on both classes of algorithms, a rigorous analytical comparison of their performance is lacking. Consequently, there is no concrete understanding of why numerical experiments show that projection-based solvers tend to perform better in many real and synthetic scenarios. In this work, we develop a framework for such analysis, and we use that framework to investigate the comparison of optimization-based and projection-based solvers.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157014</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>MPrompt: A Pretraining-Prompting Scheme for Enhanced Fewshot Subgraph Classification</title>
<link>https://hdl.handle.net/1721.1/157013</link>
<description>MPrompt: A Pretraining-Prompting Scheme for Enhanced Fewshot Subgraph Classification
Xu, Muhua
Motivated by the significant progress in NLP prompt learning, there has recently been great research interest in adopting the prompting mechanism for graph machine learning. Despite the prior success of prompting methods applied in node-level and graph-level learning tasks, subgraph-level tasks are highly underexplored, and the potential of prompting remains unclear. This thesis fills this gap by exploring the prompting mechanism for subgraph classification, which is a much more challenging task as it requires understanding both global and local graph structures. In this work, we build upon state-of-the-art self-supervised graph learning models to develop a subgraph-specific prompting scheme, Membership Prompt (MPrompt), based on traditional graph neural networks (GNNs). Our proposed prompting scheme relies on node membership knowledge to help the GNN distinguish between border and local connections, which increases its expressive power while maintaining the prompt’s independence from any specific dataset or model architecture. Additionally, we present Subgraph Reconstructive Pretraining (SRP), which can provide MPrompt with better structural embeddings during pretraining. Experiments are conducted on both synthetic and real-world datasets, including protein function prediction and social network analysis. Our method demonstrates performance improvements under the few-shot experimental setting and maintains comparable performance in full-shot settings while requiring less computation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157013</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Analysis of a Transformer-Based Solid-State Relay</title>
<link>https://hdl.handle.net/1721.1/157012</link>
<description>Design and Analysis of a Transformer-Based Solid-State Relay
Mondal, Neelambar
Automatic Test Equipment (ATE) systems require relays to perform complex high-speed tests on semiconductor devices. However, existing relays all come up short in some aspect. Electromechanical reed relays have a limited lifetime and slow switching speeds, while solid-state photoMOS relays have high on-resistance and low bandwidth. This thesis presents the design, simulation, and analysis of a new solid-state relay tailored for ATE applications. We use Analog Devices’ iCoupler technology to design this relay, relying on on-chip transformers to provide reliable input-to-output isolation. In Cadence simulations, the iCoupler relay achieves 100 mΩ on-resistance, 7.5 µs turn-on time, and 4.8 GHz output 3 dB bandwidth.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157012</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Clinical Question-Answering over Distributed EHR Data</title>
<link>https://hdl.handle.net/1721.1/157011</link>
<description>Clinical Question-Answering over Distributed EHR Data
Jiang, Emily
Electronic health records (EHRs) have become standard in US clinical practice. However, the distributed, dynamic, private, and jargon-dense nature of medical data is a barrier to harnessing Large Language Models (LLMs) for the domain. Retrieval-augmented generation (RAG), in which an LLM is provided with both the question and context returned by an external retriever, is a promising technique for addressing the unique qualities of clinical text. LLMs using RAG can answer questions about patient records without training on privacy-sensitive data; updated records can also be queried immediately without finetuning. By exposing the source documents that inform the model response, RAG enables greater physician interpretability as well as reduced hallucination, both of which are crucial for safe deployment in healthcare. This thesis presents FedRAG, a retrieval-augmented clinical question-answering (QA) system for clinicians to explore trends in patient data across distributed storage. We introduce a novel hierarchical design for federated document retrieval, in which leaf nodes perform local similarity search while non-leaf nodes route queries based on access policies and aggregate documents returned by their children. We also create a dataset on clinical trends over the MIMIC-IV database for the evaluation of QA systems on EHR data. FedRAG is implemented in Python as a federation of Flask servers using LangChain, the Qdrant vector database for retrieval, and GPT-3.5 Turbo for generation. We present a case study of three medical organizations, and find that the federation scheme results in no loss of quality against a centralized baseline. We explore the impact of resource accessibility among users with varying access permissions, observing that retrieval and generation quality degrade reasonably as document access is restricted. Finally, we evaluate performance in the key abilities required of RAG systems.
We conclude that despite remaining challenges in achieving high retrieval quality and noise robustness, FedRAG is effective at synthesizing clinical trends through information integration across EHR documents.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157011</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rethinking the Evaluation of Compositional Reasoning for Modern VLMs</title>
<link>https://hdl.handle.net/1721.1/157010</link>
<description>Rethinking the Evaluation of Compositional Reasoning for Modern VLMs
Huang, Irene Y.
Recent advancements in modern Vision-Language Models (VLMs), comprising a visual encoder coupled with a Large Language Model (LLM) decoder, have demonstrated remarkable proficiency in Compositional Reasoning (CR). CR entails grasping the significance of attributes, relations, and word order. This prompts a crucial question: have VLMs effectively tackled the CR challenge? We conjecture that existing CR benchmarks may not adequately push the boundaries of modern VLMs due to their reliance on a negative text generation pipeline. Consequently, the negatives produced often deviate either as outliers from the natural language distribution learned by VLMs’ LLM decoders or as improbable within the corresponding image context. To redress these limitations, we propose a novel pipeline integrating GPT-4V alongside a suite of contemporary open-source VLMs. Through the application of in-context-learning and prompt engineering methodologies, our pipeline autonomously generates, evaluates, and selects challenging compositional reasoning questions to establish a robust CR benchmark, which is subsequently validated manually. The meticulously curated dataset reveals a decrease in CR performance of up to 45% compared to preceding benchmarks, thereby reinstating the CR challenge even for state-of-the-art VLMs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157010</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards A Robust Integrated Urban Mobility System: Public Transit and Ride-Sharing Systems</title>
<link>https://hdl.handle.net/1721.1/157009</link>
<description>Towards A Robust Integrated Urban Mobility System: Public Transit and Ride-Sharing Systems
Guo, Xiaotong
The global pandemic has fundamentally changed lifestyles, impacting how, when, and where people travel within cities. In this post-pandemic world, urban mobility demand patterns are experiencing significant shifts. To manage the growing uncertainty in urban mobility, there is a pressing need to develop a robust urban mobility system. This system must be adaptable to evolving demand patterns while ensuring efficiency and environmental sustainability in transporting large populations. Additionally, the increasing popularity of shared mobility and rapid advancements in autonomous driving technologies are creating new opportunities for innovative approaches to urban transportation systems.&#13;
&#13;
This dissertation delves into the development of a robust and integrated urban mobility system for the future, with a focus on the public transit and ride-sharing systems. While the advent of shared mobility platforms such as Uber and Lyft, along with Autonomous Mobility-on-Demand (AMoD) services like Waymo and Cruise, has revolutionized urban travel, public transit systems remain the backbone of urban mobility. This is attributed to their capacity to move large numbers of people over long distances at a relatively low cost and in an environmentally friendly way. Thus, this study aims to enhance the robustness of both public transit and ride-sharing systems and explore ways to seamlessly integrate these two components. The dissertation presents five distinct studies to elaborate on these objectives.&#13;
&#13;
The first three studies focus on the vehicle rebalancing problem, which is one of the most critical strategies in ride-sharing operations. An effective rebalancing strategy can significantly reduce empty miles traveled and customer wait times by better matching supply and demand. While the supply (vehicles) is usually known to the system, future passenger demand is uncertain. The first study proposes a novel approach to better immunize rebalancing decisions against demand uncertainty. This approach, namely the matching-integrated vehicle rebalancing (MIVR) model, incorporates driver-customer matching into the vehicle rebalancing problem to produce better rebalancing strategies. For further protection against uncertainty, robust optimization (RO) techniques are introduced to construct a robust version of the MIVR model. Problem-specific uncertainty sets are designed for the robust MIVR model. The second study further explores different approaches for handling demand uncertainty in the vehicle rebalancing problem. There are two ways to handle uncertainty. First, the point-prediction-driven optimization framework involves predicting the future demand and then producing rebalancing decisions based on the predicted demand. Second, data-driven optimization approaches directly prescribe rebalancing decisions from data. In this study, a predictive prescription framework is introduced to this problem, where the benefits of predictive and data-driven optimization models are combined.&#13;
&#13;
Although vehicle rebalancing algorithms could improve system efficiency, there exists a detrimental feedback loop where underserved communities with low demand density are unintentionally discriminated against. To resolve this fairness issue, the third study develops algorithms for vehicle rebalancing that aim to minimize disparity within the system. Grasping the concept of disparity is a foundation for understanding fairness in the ride-hailing system. The vehicle rebalancing problem encompasses two critical aspects: upstream demand forecasting and downstream vehicle repositioning. The issues of disparities within both these components are addressed. To reduce disparity in demand prediction, we implement a strategy utilizing a Socio-Aware Spatial-Temporal Graph Convolutional Network (SA-STGCN), aimed at improving demand forecast accuracy while reducing discrepancies in prediction errors across diverse regions. For equitable repositioning of the supply-side vehicles, we introduce a disparity-reducing MIVR system. This system is designed to facilitate a balanced vehicle distribution, ensuring that ride-hailing services are accessible equitably across different areas. &#13;
&#13;
The fourth study focuses on the robustness of public transit systems. Few studies have considered demand uncertainties when designing transit schedules. To better address demand uncertainty issues inherent in public transit systems, this study utilizes the RO framework to generate robust transit schedules against demand uncertainty. A nominal (non-robust) optimization model for the transit frequency setting problem (TFSP) under a single transit line setting is first proposed. The model is then extended to the RO-based formulation to incorporate demand uncertainty. The large-scale origin-destination (OD) matrices for real-world transit problems present computational challenges to solving the optimization problem. To efficiently generate robust transit schedules, a Transit Downsizing (TD) approach is proposed to reduce the dimensionality of the problem. &#13;
&#13;
The last study focuses on the integration of emerging AMoD systems with existing public transit networks. We propose a novel optimization framework to generate the system design of Transit-Centric Multimodal Urban Mobility with Autonomous Mobility-on-Demand (TCMUM-AMoD) at scale. The system operator (public transit agency) determines the network design and frequency settings of the public transit network, fleet sizing and allocations of the AMoD system, and the pricing for using the multimodal system, with the goal of minimizing passenger disutility. Passengers' mode and route choice behaviors are modeled explicitly using discrete choice models. A first-order approximation algorithm is introduced to solve the problem at scale. Using a case study in Chicago, we show the potential to generate integrated urban mobility systems under different demand scenarios.&#13;
&#13;
The final chapter summarizes the whole dissertation and outlines potential avenues for future research directions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157009</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and Implementation of the U.S. Hydrogen Production Tax Credit</title>
<link>https://hdl.handle.net/1721.1/157008</link>
<description>Modeling and Implementation of the U.S. Hydrogen Production Tax Credit
Giovanniello, Michael A.
Low-carbon hydrogen (H2) could contribute to achieving long-term climate goals by supporting the decarbonization of several hard-to-abate industries. The U.S. Inflation Reduction Act (IRA) includes a tiered hydrogen production tax credit (PTC) awarded for producing H2 below certain emissions thresholds. One pathway for producing PTC-eligible H2 is water electrolysis supplied with low-carbon electricity. But assessing the systems-level emissions associated with electrolytic H2 is challenging, not only because instantaneous power flows from a particular producer cannot be directly associated with a particular user, but also because of the risk that electrolyzers might divert clean electricity away from the grid. Following the passage of the IRA, there has been a vigorous debate focusing primarily on the time-matching requirements (that is, the period over which electricity use must match production from contracted generators) for grid-connected H2 production to receive the PTC.&#13;
&#13;
Applying a macro-energy systems model to case studies of Texas and Florida, we show that divergent results in the literature, which presented a conundrum for regulators trying to pick between policy options, are explained by different interpretations of the proposed “additionality” requirement. Specifically, the emissions associated with H2 production under different “time-matching” requirements are conditional on how additionality is modeled. We further show that the interaction of these qualifying time-matching requirements with other energy system policies could reduce the merits of more stringent time-matching requirements. For instance, if a region has relatively high renewable portfolio standards (RPSs) to enable grid decarbonization, we show that less stringent (and therefore less costly) time-matching requirements are sufficient to avoid any increases in system-level emissions. &#13;
&#13;
Building on this analysis, we explore how uncertainty in inter-annual variable renewable energy (VRE) generation complicates the implementation of stringent PTC requirements. We confirm that a system design that accounts for inter-annual VRE uncertainty comes at a cost premium, a reality ignored by the existing literature. In addition, we show that inter-annual VRE uncertainty will necessitate the formation of markets for hourly electricity attribution certificates (EACs) to make up for inevitable shortfalls in the supply of contracted VRE electricity under an hourly time-matching requirement. &#13;
&#13;
We recommend that the Treasury adopt a phased and regionally differentiated approach to implementing the PTC: regions without RPS policies could transition to an hourly time-matching requirement in the mid-term (e.g., by 2030), whereas regions with sufficient RPS policies could continue with looser requirements. In addition to PTC implementation, these results are relevant to the broader field of Scope 2 emissions accounting for voluntary (e.g., corporate net-zero goals) and regulatory purposes. As more private enterprises, such as data center owners, pursue voluntary measures to reduce their electricity-related emissions, our work provides a foundation for further research into clean energy procurement standards (voluntary or mandated) that support power sector decarbonization.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157008</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tradeoffs Between Aboveground and Soil Carbon Accumulation Following Forestation</title>
<link>https://hdl.handle.net/1721.1/157007</link>
<description>Tradeoffs Between Aboveground and Soil Carbon Accumulation Following Forestation
Schug, Jennifer Lin
Recent decades have seen a rapid increase in global warming due to anthropogenic greenhouse gas emissions. One prevalent climate change mitigation strategy is tree planting, as trees sequester large amounts of carbon in their aboveground biomass. However, there is emerging evidence that under some conditions, soil carbon decreases following forestation, offsetting the carbon accumulated aboveground and rendering carbon sequestration efforts ineffective. The factors driving these changes in net ecosystem carbon are currently unknown. Here, we conducted a global meta-analysis on the factors affecting aboveground biomass versus soil organic carbon (SOC) accumulation following forestation in grasslands and croplands. We considered the effects of prior land use, regrowth strategy, mycorrhizal associations, and environmental factors on total ecosystem carbon and SOC accumulation over time. Results indicate that while there is a tradeoff between SOC and aboveground carbon accumulation, the loss of SOC does not negate the increase in aboveground carbon following forestation. Sites with low initial SOC before forest establishment accumulate more SOC than sites with high initial SOC, regardless of prior land use. Overall, forest stand age, prior land use, regrowth strategy, and mycorrhizal associations drive carbon accumulation over time and should be considered in the context of future forestation projects implemented for carbon sequestration.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157007</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Segment Anything on the Edge</title>
<link>https://hdl.handle.net/1721.1/157006</link>
<description>Efficient Segment Anything on the Edge
Stiles, Nicole
The Segment Anything Model (SAM) is a vision foundation model facilitating promptable and zero-shot image segmentation.  SAM-based models have a wide range of applications including autonomous driving, medical image segmentation, VR, and data annotation.  However, SAM models are highly computationally intensive and lack a flexible prompting mechanism.  On an NVIDIA A100 GPU, SAM runs at 11 frames/second, falling short of real-time performance and precluding the use of SAM on edge devices.  To tackle both the latency constraint and the prompt flexibility constraint, we introduce GazeSAM, a new real-time gaze-prompted image segmentation model.  GazeSAM uses face and gaze detection to determine the direction of a user's gaze, object detection to find candidate objects of interest, depth estimation to perform background detection, and image segmentation to generate masks.  The final output is a mask segmenting the object at the focus of the user's gaze.  By performing algorithmic optimizations, employing inference engines, and applying FP16 and INT8 quantization, we achieve a 24x speedup relative to the baseline FP32 PyTorch implementation.  GazeSAM runs at over 30 FPS, enabling real-time performance on an RTX 4070 GPU.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157006</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Soil moisture-based drought monitoring using remote sensing over Africa</title>
<link>https://hdl.handle.net/1721.1/157005</link>
<description>Soil moisture-based drought monitoring using remote sensing over Africa
Lu, Catherine S.
Agricultural droughts, or persistent deficits in soil moisture, can have severe consequences on crop production and can result in economic crisis and widespread food insecurity. The impacts of drought are especially relevant in Africa, where agriculture is largely supported by rainfall. In contrast to developed regions, drought monitoring systems for Africa are less prevalent at the continental scale and are limited by the scarcity of in-situ observations available for model validation. In this study, we use soil moisture data gathered from the Soil Moisture Active Passive (SMAP) mission, spanning April 2015 to December 2023, to develop a drought monitoring system that incorporates seasonality and climatology. Monthly drought thresholds are developed based on percentiles of soil moisture found in previous literature, creating location-specific thresholds of drought for each month. These thresholds were applied at the continental, regional, and country levels to reconstruct historical records of drought throughout the SMAP time record (time series) and localities of drought intensities for a given time period (drought maps). Additionally, a methodology of exponential time filtering is explored to convert surface soil moisture from SMAP into root-zone soil moisture, which can be more relevant for agricultural production. The reconstructed historical drought results align with literature on drought events in regions of Africa (e.g., the 2017-18 drought anomalies). For future events, this study could inform drought monitoring through remote sensing and allow for measures of drought response to improve overall food security.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157005</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Relationship between Linguistic Representations in Biological and Artificial Neural Networks</title>
<link>https://hdl.handle.net/1721.1/157004</link>
<description>The Relationship between Linguistic Representations in Biological and Artificial Neural Networks
Kauf, Carina
Research in cognitive neuroscience strives to understand the representations and algorithms that support human cognition, including language. The scientific tools for investigating human-unique capacities, such as language, have long been limited. For example, we do not have the option to learn about the neural circuits that support these capabilities by studying simpler systems than the human brain, such as animal models. However, recent advances in engineering have provided new tools for studying language: artificial neural network language models (LMs), which exhibit remarkable linguistic capabilities and are fully intervenable. In this thesis, I draw on these advances to shed light on language processing in the human brain.&#13;
&#13;
Of course, comparisons between LMs and the human language system face challenges. I argue that in order to evaluate the suitability of LMs as cognitive models of language processing, we need to better understand (i) how linguistic stimuli are encoded in the internal representations of LMs, (ii) how linguistic stimuli are encoded in the language-selective cortex of humans, and (iii) whether and how we can meaningfully relate linguistic representations from these two systems to each other. This thesis work makes progress on all three questions by combining evidence from neuroimaging, behavioral research, and computational modeling. First, I analyze whether LM representations of linguistic stimuli encode information about semantic plausibility. I find that LMs acquire substantial but inconsistent plausibility knowledge and that their judgments are influenced by low-level features of the input, making them good models of human language processing but unreliable models of world knowledge. Then I use fMRI to probe the computations that drive the language network’s response. I find evidence for a generalized reliance of language comprehension on syntactic processing, contra claims that language comprehension relies on shallow/associative processing, and for only a superficial encoding of sentence meaning.
Taken together, this thesis provides evidence that the core language network encodes semantic information only superficially, implying that naturalistic human language processing must rely on the interaction of multiple tightly interconnected systems, and argues that – in spite of their limitations – LMs can help improve our understanding of human language processing through the interplay of in-silico modeling and human experiments.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157004</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techno-economic Analysis of Deuterium-Tritium Magnetic Confinement Fusion Power Plants</title>
<link>https://hdl.handle.net/1721.1/157003</link>
<description>Techno-economic Analysis of Deuterium-Tritium Magnetic Confinement Fusion Power Plants
Araiinejad, Layla
This thesis presents a techno-economic analysis of Deuterium-Tritium Magnetic Confinement Fusion Power Plants (FPPs), aimed at enhancing the economic viability and scalability of FPPs in response to global energy challenges and climate change. Amidst a backdrop of substantial investments in fusion technology, totaling $6.2 billion to date, this study critically assesses the overnight capital costs of an FPP that hosts ARAI, a 350 MWe tokamak reactor based on the MIT ARC fusion concept. This research evaluates the economic viability of constructing an Nth-of-a-kind ARAI-FPP. The overnight capital costs for ARAI-FPP are estimated to range between $8,800/kW and $22,200/kW, with this variation largely driven by differing regulatory and manufacturing assumptions. The overall cost breakdown is found to be similar to past and recent fusion literature, where the direct cost of fusion reactor equipment is the largest cost driver. The Levelized Cost of Electricity is estimated to be between $140/MWh and $550/MWh. The findings aim to deepen the understanding of absolute and relative cost drivers in fusion energy and suggest strategies to improve its economic feasibility. The analysis highlights the significant role of fabrication costs and regulatory frameworks in influencing cost dynamics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157003</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hidden Influence in Dynamic Networks</title>
<link>https://hdl.handle.net/1721.1/157002</link>
<description>Hidden Influence in Dynamic Networks
Erhardt, Keeley Donovan
Our world is structured by networks that connect objects, ideas, and people. These networks consist of nodes (entities) and edges (connections) that dynamically evolve, reflecting changes in relationships, the emergence of new entities, and the dissolution of old links. Unlike static networks, which offer a snapshot of connections at a specific time, dynamic networks allow for modeling processes and system-level changes over time. These changes shed light on the evolution of social interactions, digital communications, financial transactions, and other networked data. Leveraging mathematical and statistical models, including neural network techniques, this research delves into the hidden influence that weaves through seemingly unrelated, yet intrinsically connected, entities in online social and financial networks. I begin with a foundational overview of graph learning techniques and the specific models utilized in my work. The body of this dissertation is divided into three core sections. The first examines the orchestration of influence campaigns by state-backed entities on social media, utilizing the influence model to unravel the complex interactions among networked Markov chains based on temporal activity patterns. Next, I quantitatively analyze the shifting geopolitical relationships and digital diplomacy efforts between two nation-states, employing a node representation learning strategy. Lastly, I apply a geometric deep learning framework to uncover connections between cryptocurrency wallets, analyzing transaction patterns and temporal dynamics to identify underlying networks. By introducing innovative approaches that leverage probabilistic and deep learning techniques to analyze dynamic networks, this dissertation contributes valuable insights and methodologies with significant implications for diverse domains such as cybersecurity, financial technology, and communications infrastructure.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157002</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Force Dynamics of the Rat Lateral Gastrocnemius Muscle after Undergoing Sensory Protection</title>
<link>https://hdl.handle.net/1721.1/157001</link>
<description>Force Dynamics of the Rat Lateral Gastrocnemius Muscle after Undergoing Sensory Protection
Gutierrez Arango, Samantha
The sensory protection procedure, involving the reinnervation of a motor-denervated muscle with a sensory nerve, has shown promise in preserving muscle function and structure. This thesis investigates the impact of sensory protection on the force dynamics and muscle architecture of the lateral gastrocnemius muscle in a rat animal model. Using a within-subjects experimental design, this preliminary study compared Sensory Protected and contralateral Intact muscles within a cohort of four rats. In situ ergometry experiments suggest that normalized Force-Velocity-Power (FVP) properties may be largely preserved after sensory protection, with small percent differences in normalized FVP curves between the Sensory Protected muscles and contralateral muscle controls. Key FVP parameters such as peak velocity and specific peak power exhibited higher percent differences for the Sensory Protected muscles, but pennation angles and physiological cross-sectional area showed lower percent differences, suggesting that sensory reinnervation may influence muscle structure and fundamental force dynamics. Despite limitations, such as the small sample size, the study lays the groundwork for future research investigating the cellular and molecular mechanisms underlying the observed changes. The findings highlight the potential of Sensory Protected muscles as biological actuators in prosthetic devices, and suggest that sensory reinnervation may be a promising strategy to maintain or restore muscle function in individuals with motor impairment.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157001</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and Characterization of a Novel, Low-Cost Method for Measurement of Volatile Organic Compounds</title>
<link>https://hdl.handle.net/1721.1/156999</link>
<description>Development and Characterization of a Novel, Low-Cost Method for Measurement of Volatile Organic Compounds
Gao, Amanda
Measurements of atmospheric pollutants are crucial for improving our understanding of atmospheric chemistry, managing air quality, and estimating exposure to compounds that have profound impacts on human health. Low-cost sensors (LCS), due to order-of-magnitude reductions in power usage, maintenance needs, and purchase cost compared to research-grade reference instruments, have the potential to greatly expand the spatiotemporal resolution of these measurements. While there are several commercially available LCS that can measure environmental volatile organic compounds (VOCs), an important class of hazardous pollutants, these sensors can only make non-specific “broadband” measurements and have, to date, been underutilized in research. &#13;
&#13;
This thesis describes the development, characterization, optimization, and use of a novel low-cost instrument for measuring environmental VOCs. This instrument utilizes an array of low-cost VOC sensors representing three fundamentally different sensor types.  It also takes advantage of user-controlled parameters that achieve greater degrees of differentiation between responses of sensors with the same measurement type. In the first part of this work, we describe the instrument itself, as well as a laboratory study that characterizes sensor responses to environmentally relevant VOCs. Though environmental applications pose unique challenges that cannot be completely addressed in the laboratory, our results demonstrate that this instrument can give quantitative, chemically specific information about VOCs.&#13;
&#13;
The second part of this work is based on measurements made as part of a collaborative indoor air quality campaign, where our low-cost VOC instrument and co-located reference monitors made measurements of realistic indoor VOC sources. Results from an LCS-derived matrix factorization analysis were compared to an independent factor analysis of reference VOC measurements, demonstrating that our uncalibrated low-cost data can provide quantitative and qualitative information about VOC sources and composition. Based on this comparison analysis, we describe a procedure for sensor selection that allows us to evaluate the relative importance of specific sensors or sensor types in providing information about VOC composition and sources, helping future similar LCS array applications to avoid measurement redundancies and minimize material cost. &#13;
&#13;
Overall, the results from this thesis show that this LCS instrument can provide useful, quantitative information about VOC sources and composition at a fraction of the size and cost of a research-grade instrument, opening the possibility of widespread and spatially distributed measurements of VOCs in air quality and chemistry contexts, especially for indoor air.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156999</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Translations: Designing Restorative Listening Experiences in the Age of Social Fragmentation</title>
<link>https://hdl.handle.net/1721.1/156998</link>
<description>Translations: Designing Restorative Listening Experiences in the Age of Social Fragmentation
Obeng-Marnu, Naana
This thesis builds on a body of sociotechnical research at the MIT Center for Constructive Communication that draws upon "ancient wisdoms" of dialogue and listening and harnesses the power of technology to inform the design of dialogue spaces that promote deep, meaningful, and authentic conversations. Our approach hinges on the belief that society functions best when we hear and understand each other, an outcome that our work strives to advance by exposing people to the personal stories of others in ways that connect rather than divide. I take inspiration from anthropological practices and recent Data Humanism and Activism epistemologies to derive a set of design considerations for restorative interfaces. These principles inform the development of Translations, an interactive experience that invites audiences to engage more deeply with a curated collection of stories surfaced during small-group facilitated conversations. The design of this visual and auditory experience allows audiences to explore stories they may otherwise not hear through websites that center thematic summaries and high-level insight visualizations. The selected stories are curated using AI emotion analysis and sensemaking, which are leveraged to draw the user’s attention to moments of interest across conversations, such as moments of affirmation. The efficacy of this curation method to engender empathy and emotional disruption, precursors to restorative listening, is evaluated, and results from user tests of, and interviews about, the overarching interface are discussed. Ultimately, this thesis presents both a framework for automatic curation of audio narratives and an interactive interface to present these selected stories, both of which have wide-ranging applications in the media and civic space.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156998</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy-Efficient Real-Time Hardware Acceleration for Gaussian Fitting</title>
<link>https://hdl.handle.net/1721.1/156997</link>
<description>Energy-Efficient Real-Time Hardware Acceleration for Gaussian Fitting
Wojtyna, Adrianna D.
Micro-robots play an important role in numerous tasks, including search and rescue, exploration, and navigation. A significant challenge to their deployment is their limited energy capacity, which constrains the computation such systems can complete. Specifically, 3D mapping algorithms contribute significantly to the compute power footprint as a result of repeated memory accesses. A promising approach involving Gaussian Mixture Models (GMMs), the Single-Pass Gaussian Fitting (SPGF) algorithm, enabled real-time 3D mapping with minimal memory and energy requirements due to its single-pass processing of input data. To further reduce energy consumption, we propose the design of an FPGA (Field Programmable Gate Array)-based hardware accelerator that performs Gaussian fitting based on the SPGF algorithm with 10.4× lower energy per image (based on post-implementation power analysis) compared to the original software implementation. By using fixed-point numerical representation and concurrent processing of data inputs, our proposed hardware accelerator, when operating at 100 MHz, is capable of processing depth images at an average rate of 303.09 frames per second (fps), a 7.97× improvement over the original software implementation of SPGF (32 fps). We also demonstrated 46.1× lower average FPGA resource utilization compared to the previously proposed hardware accelerator for GMMs. Our proposed design was demonstrated as part of the complete subsystem, allowing for visualization of the constructed map in real time. The design was demonstrated to perform at 100 MHz in isolation and verified with a 50 MHz subsystem on an AMD Virtex UltraScale+ VCU118 FPGA.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156997</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimation of County-Level Evapotranspiration and Irrigation using High-Resolution Planet Satellite Data</title>
<link>https://hdl.handle.net/1721.1/156996</link>
<description>Estimation of County-Level Evapotranspiration and Irrigation using High-Resolution Planet Satellite Data
Wickman, Sydney
Increased agricultural production has spurred the need for irrigated land in areas that may not be supported by surface water. Instead, groundwater is primarily used for irrigation in states such as Kansas to supplement the water needed for this land. The increase in groundwater use for irrigation may be contributing to areas of increasing groundwater decline, and more precise tracking of irrigation should take place on a larger, regional scale. This will allow for more effective tracking of irrigation trends and their possible effects. This thesis explores the challenges and possibilities of applying the Backward-Averaged Iterative Two-Source Surface temperature and energy balance Solution (BAITSSS) model with high-resolution PlanetScope (Planet) satellite data to Cheyenne County, Kansas. The drop in reflectance observed in fields in the Planet satellite data was used as a signal for the first irrigation event, and the model was run forward from there. The results demonstrate that BAITSSS evapotranspiration (ET) is comparable to the OpenET model, although BAITSSS overall estimates higher ET in agricultural areas than OpenET. The irrigation results, however, are underestimated, though many limiting factors could be adjusted with further consideration. More research should be conducted toward the efficient and effective running of the BAITSSS model over a larger region.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156996</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A bone-anchored mechanoneural knee prosthesis to enhance control and embodiment</title>
<link>https://hdl.handle.net/1721.1/156995</link>
<description>A bone-anchored mechanoneural knee prosthesis to enhance control and embodiment
Shu, Tony
To maximally utilize the peripheral nervous system for prosthetic control, it is necessary to first understand the compounded errors induced by amputated physiology before developing the appropriate interfacing technologies to extract any latent movement information. Through this work, I develop a foundational approach to amputation interventions and artificial interfaces applied toward neurorobotic control at the transfemoral level. The first part of this dissertation explores the neurophysiological and neuromechanical outcomes of a revisional transfemoral amputation that restores agonist-antagonist muscle dynamics. A within-subjects study is performed to investigate changes in muscular function and cortical activity as a result of the intervention. Through these data, I provide evidence that extant amputated musculature can be modified to restore functionality for the purpose of efferent neurorobotic control. The second part of this dissertation explores a combined implementation of the revisional transfemoral amputation with a bone-anchored, or osseointegrated, transfemoral implant and chronically implanted intramuscular electrodes. The clinical outcomes of the combined transfemoral platform are quantified through biophysical measurements and measurements of the stability of the implanted hardware to suggest the potential for bidirectional neurorobotic interfacing. The third part of this dissertation compares cohorts of persons with amputation possessing varied muscle architectures and physical interfacing configurations on their ability to produce physiological neurorobotic knee dynamics. Two subjects with the novel transfemoral platform are compared to the other cohorts without individual aspects of the platform, demonstrating unprecedented agility and sustainment of prosthetic embodiment in the process.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156995</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing a Smartwatch App for Automated Targeted Memory Reactivation</title>
<link>https://hdl.handle.net/1721.1/156994</link>
<description>Designing a Smartwatch App for Automated Targeted Memory Reactivation
Podrug, Anita
Targeted Memory Reactivation (TMR) experiments have shown potential in enhancing learning and memory by pairing sensory stimuli with specific memories during learning and reintroducing these stimuli during slow-wave sleep. This process aids in memory consolidation, where recent neural representations are reactivated and transferred to long-term storage. Traditionally, TMR has been limited to laboratory settings. For my thesis, I developed a TMR system usable at home and investigated its effectiveness on memory recall of a nature documentary, using vibration as a stimulation cue. I developed a machine-learning model that performs sleep stage classification from heart rate and motion data that can be collected from a smartwatch in real time. Using this model, the smartwatch was programmed to deliver TMR cues when participants enter stage N3 (slow-wave) sleep. This TMR system was found to improve recall 24 hours and 1 week after the initial learning, but the results were not statistically significant due to an insufficient amount of data; further studies would be required to confirm them. This advancement of at-home TMR can be extremely useful for further understanding sleep’s role in memory and can provide a system for the general public to improve their learning and memory. Additionally, the development of an automated real-time sleep stage classification model can enable more reliable, higher-quality experiments in a variety of sleep studies in the future.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156994</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for Extracting and Analyzing Political Content on TikTok</title>
<link>https://hdl.handle.net/1721.1/156993</link>
<description>Methods for Extracting and Analyzing Political Content on TikTok
Fadel, Marie Diane
In this thesis, I investigate the dynamics of political discourse on TikTok, with a focus on crafting a comprehensive methodology for extracting and analyzing political content related to the 2024 U.S. Presidential Election. This research utilizes a blend of advanced computational tools and crowd-sourced evaluations to delve into the mechanisms through which political influence is both exerted and perceived on the platform. For data collection, the study employed TikAPI, a tool designed for systematic scraping of TikTok videos, which targeted specific political hashtags to amass a substantial dataset. This dataset was analyzed using a variety of innovative methods, including snowball sampling to ensure a representative range of political engagement, and integration with Python to automate the data collection process. Additionally, I utilized Large Language Models (LLMs) to evaluate the relevance and persuasive impact of the content, and these machine-generated insights were then benchmarked against human judgments. Overall, the findings indicate a slight preference for Republican discourse on TikTok. Moreover, I demonstrate that OpenAI’s GPT can effectively classify videos by topic, although human input remains essential for more nuanced tasks such as stance detection and evaluation of persuasive effect. This exploration into the political landscape of TikTok represents one of the first of its kind, with the primary aim of this thesis being to develop a methodology that will support future research in this field.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156993</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Developmental Change in Ego-Motion Experience Across Infancy</title>
<link>https://hdl.handle.net/1721.1/156992</link>
<description>Exploring Developmental Change in Ego-Motion Experience Across Infancy
Fuchs, Ariel
Humans flexibly and intuitively use vision to plan and guide navigation through the local environment. How does this ability develop in infancy? One possibility is that the development of visual representations for navigation is driven by passive exposure to the visual statistics of scenes. Another possibility is that active navigation experience using vision to plan and guide locomotion is the driving factor. In order to distinguish between these two hypotheses, it is necessary to understand the nature of infants’ early visual scene experience itself. Surprisingly little prior work has characterized infants’ early experiences with ego-motion through scenes, before and after learning to locomote. We use ecological momentary assessments to quantify infants’ exposure to ego-motion through scenes, and how that changes with locomotor experience. We found that pre-crawling infants who have never independently navigated already experience significant passive visual exposure to forward-facing ego-motion through scenes. Nevertheless, this experience increases substantially with age and locomotor status.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156992</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contextual Predictability and Phonetic Reduction</title>
<link>https://hdl.handle.net/1721.1/156991</link>
<description>Contextual Predictability and Phonetic Reduction
Martin, Kinan R.
Phonetic reduction is a process which alters the acoustic quality of a sound, often a vowel or word, to a perceived weaker or shorter state. Previous research suggests that the degree of reduction of a word is influenced by its contextual predictability. However, the nature of how context direction and size govern phonetic reduction has not been thoroughly explored. The advancement of self-supervised language models provides a means to assign meaningful estimates of word predictability conditioned on different contexts. This paper explores the effect of contextual predictability on phonetic reduction making use of such models. We train instances of GPT-2 on different context directions (past, future, and bidirectional) and context sizes (bigram vs. sentence) to provide measures of conditional word predictability, then use linear regression to quantify their correlation with a measure of phonetic reduction (word duration). Our results provide evidence that the contextual probability of a word given the following context correlates with word duration more strongly than the past or bidirectional contexts for both context sizes, suggesting that phonetic reduction may be a reliable indicator of reduced cognitive load in a speaker’s planning of the rest of an utterance.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156991</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generation, Detection, and Evaluation of Role-play based Jailbreak attacks in Large Language Models</title>
<link>https://hdl.handle.net/1721.1/156989</link>
<description>Generation, Detection, and Evaluation of Role-play based Jailbreak attacks in Large Language Models
Johnson, Zachary D.
While directly asking a Large Language Model (LLM) a harmful request (e.g., "Provide me instructions on how to build a bomb.") will most likely yield a refusal to comply due to ethical guidelines set forth by developers (e.g., OpenAI), users can trick the LLM into providing this information using a tactic called a role-play based jailbreak attack. This attack consists of instructing the LLM to take on the role of a fictional character that does not adhere to the model developer’s ethical guidelines and will comply with any request. Role-play based jailbreak attacks remain a critical safety issue and open research question due to their success in getting an LLM to comply with a harmful request, as well as their ability to be generated without a formal technical background. Companies such as OpenAI employ manual tactics like red-teaming to enhance an LLM’s robustness against these attacks; however, these tactics may fail to defend against all role-play based jailbreak attacks due to their potentially limited ability to anticipate unseen attacks. In this work, we aim to better understand the landscape of role-play based jailbreak attacks so that we can precisely detect these attack attempts in the wild before they yield a harmful output from an LLM. Specifically, we focus on three main tasks: generating synthetic examples of role-play based jailbreak attack prompts; testing these role-play prompts on a target LLM to evaluate whether they successfully jailbreak the LLM, labeling our prompts accordingly; and training a robust detection model that can precisely predict whether a role-play prompt will successfully yield a jailbreak attack in an LLM before it is fed any malicious requests. Through these processes, we learn the following, respectively: 1) out-of-the-box models such as GPT-4 are effective at generating successful role-play jailbreak attack prompts when given just a few examples via few-shot prompting; 2) we can automatically classify LLM responses as jailbroken or not with high accuracy using statistical methods including Principal Component Analysis (PCA) and Support Vector Machines (SVMs); 3) most classification architectures are unable to perform the complex task of accurately predicting whether a role-play prompt will successfully yield a jailbreak attack. By better understanding the nature of role-play based jailbreak attacks, we hope to contribute to the research area of jailbreak attack detection in LLMs so that they can be robustly defended against in the future.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156989</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Benchmarking Graph Transformers Toward Scalability for Large Graphs</title>
<link>https://hdl.handle.net/1721.1/156988</link>
<description>Benchmarking Graph Transformers Toward Scalability for Large Graphs
Lim, Katherine S.
Graph transformers (GTs) have gained popularity as an alternative to graph neural networks (GNNs) for deep learning on graph-structured data. In particular, the self-attention mechanism of GTs mitigates the fundamental limitations of over-squashing, over-smoothing, and limited expressiveness that GNNs face. Furthermore, like transformers used for natural language processing and computer vision, GTs have the potential to become foundation models that can be used for various downstream tasks. However, current GTs do not scale well to large graphs, due to computational cost. Here, we formulated a GT architecture as part of a larger scheme to build a GT made scalable through hierarchical attention and graph coarsening. Specifically, our goal was to optimize the GT building block of the scalable GT. By adding GraphGPS-inspired message-passing neural network (MPNN) layers to a modified version of the Spectral Attention Network (SAN) and performing hyperparameter tuning, we built a GT architecture that performs comparably to GraphGPS on the node classification task on the Cora and CiteSeer datasets. Compared to the modified version of SAN that we started with, our architecture is faster to train and evaluate, and also obtains higher node classification accuracies on the Cora and CiteSeer datasets. Our results demonstrate how message passing can effectively complement self-attention in GTs such as SAN to improve node classification performance. With further architectural improvement, we expect our model to serve as an effective building block for scalable GTs. Such scalable GTs may be used for node classification on large graphs, a common task for industrial applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156988</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unsupervised Learning for Generative Scene Editing and Motion</title>
<link>https://hdl.handle.net/1721.1/156987</link>
<description>Unsupervised Learning for Generative Scene Editing and Motion
Fang, David S.
Unsupervised learning for images and videos is important for many applications in computer vision. While supervised methods usually achieve the best performance, the amount of data curation and labeling that supervised datasets require makes them difficult to scale. On the other hand, unsupervised learning is more scalable and generalizable and requires much less data curation, but it is harder because it lacks a clear target objective. In this thesis, we propose two distinct lines of unsupervised learning work with generative applications: 1) BlobGSN and 2) optical flow estimation and flow generation with diffusion models. BlobGSN explores the unsupervised learning of spatially disentangled mid-level latent representations for 3D scenes in a generative context. Within this generative framework, we show that BlobGSN facilitates novel scene generation and editing. In a different vein, current state-of-the-art optical flow learning models rely on ground-truth data collection for sequences of frames in videos. Unsupervised learning of optical flow, which would not require ground-truth data, could theoretically leverage any publicly available video data for training. We explore different frameworks for unsupervised optical flow learning to tackle problems such as photometric error, occlusion handling, and flow smoothness. Additionally, we propose a generative framework for generating optical flow from a single frame that can be trained in an unsupervised manner.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156987</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpretable Computational Modeling of pre-mRNA Splicing for Multiple Eukaryotic Species</title>
<link>https://hdl.handle.net/1721.1/156986</link>
<description>Interpretable Computational Modeling of pre-mRNA Splicing for Multiple Eukaryotic Species
McCue, Kayla M.
One of the key steps in eukaryotic gene expression is pre-mRNA splicing, whereby intronic sequences are excised from immature pre-mRNA transcripts and the remaining exonic sequences are joined together. This process is catalyzed by the spliceosome, a large complex of proteins and RNAs. A variety of RNA sequence features influence this process, including the core splice site (SS) motifs and splicing regulatory elements (SREs), which recruit protein splicing factors. Together these RNA elements and factors form an intricately interconnected regulatory system which is still incompletely understood. In this thesis, I describe SMsplice, an interpretable computational model of splicing that seeks to improve the understanding of how sequence elements influence the splicing pattern of pre-mRNA transcripts in a variety of eukaryotic organisms. SMsplice incorporates three key aspects of the splicing process: scores of potential SS motifs, scores of SS-proximal hexamers representing SREs, and structural preferences of the spliceosome for particular exon and intron lengths. We iteratively learn the SRE scores within this framework and assess performance by comparing the predicted splicing pattern of a transcript to a canonical pattern to calculate the F1 score, the harmonic mean of precision and recall. Our best-performing SRE scores yield performances of 70% in human, 73% in mouse, 86% in zebrafish and Drosophila melanogaster, 83% in silkworm moth, and 85% in Arabidopsis thaliana. Applying SMsplice to multiple organisms enables a variety of evolutionary analyses. Comparing the relative contributions of the SS scores, SRE scores, and the structural preferences revealed an increased dependence on SREs in lineages with longer introns, particularly mammals. Exonic regulatory information flanking real versus decoy SS was on average more discriminative than intronic regulatory information for all metazoans studied. 
In Arabidopsis, intronic and exonic SREs played comparable roles, suggesting a greater role for intronic information in plants compared with animals. Motifs generated from the hexamers with the strongest SRE scores recapitulated known splicing regulator binding sites in multiple organisms, and a majority of the human motifs were significantly associated with splicing quantitative trait loci, including novel as well as known motifs. Furthermore, many of these motifs are common to all of the organisms tested, suggesting that aspects of splicing regulation are deeply conserved. This notion was further supported by the observation that using the SRE scores learned for one organism within the SMsplice model for another organism generally performed well. A notable exception was that SRE scores learned in mammals performed fairly well in non-mammals, but not vice versa, which may reflect the evolution of mammalian-specific splicing regulation alongside the lengthening of introns. This thesis demonstrates the utility of interpretable models of splicing, which allow for comparative analyses of features between organisms.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156986</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Battery Blueprint: Saudi Arabia’s Strategic Foray into the Battery Value Chain</title>
<link>https://hdl.handle.net/1721.1/156985</link>
<description>Battery Blueprint: Saudi Arabia’s Strategic Foray into the Battery Value Chain
Alhakbani, Alanoud
This thesis evaluates Saudi Arabia’s potential to establish a foothold in the global battery industry, an industry that would be pivotal for its energy transition and economic diversification goals. Key enablers such as Saudi Arabia’s commitment to renewable energy and industrial growth in adjacent sectors, including automotive and refinery, provide a foundation for entry into the battery value chain. However, the Kingdom must navigate barriers such as market competition and the need for technological expertise in advanced battery production, a market led by heavyweights like China and innovators across the globe. This study assesses the viability of a bottom-up technology catch-up approach for industrial competency in battery technology—a contrast to the top-down models employed by established players. The research comprises an in-depth analysis of enablers and barriers for technology catch-up utilizing a proposed assessment framework, and strategies for effectively localizing different parts of the battery value chain. The outcome aims to offer a strategic blueprint for Saudi Arabia to capitalize on the burgeoning demand for battery technology and enhance its global economic stature.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156985</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cracking Common Notions Relating Egg Strength to Impact Orientation</title>
<link>https://hdl.handle.net/1721.1/156984</link>
<description>Cracking Common Notions Relating Egg Strength to Impact Orientation
Sutanto, Antony
The chicken egg possesses a shell structure that is conventionally thought to be strongest when loaded on its vertical poles, particularly the sharp end, which resembles a structural arch. This notion has influenced educational activities such as the "egg drop challenge", where participants typically orient the egg with its sharp end facing downwards to improve its chances of resisting fracture upon impact. This study tests this conventional wisdom by investigating the egg's strength, or energy sustained before rupture, as a function of its orientation. First, static compression tests were conducted to determine the maximum energy absorbed by the egg along its compression axes. Eggs yielded greater deformations and absorbed more energy before rupture when compressed horizontally rather than vertically, suggesting potential advantages under dynamic loading conditions. To validate that these trends also held under dynamic loading, drop tests from varying heights were performed to assess the kinetic energy required to fracture the egg. Contrary to intuitive understanding, eggs dropped on their equators withstood greater drop heights without rupturing than those dropped on their vertical poles. This unexpected finding challenges the prevailing notion of the egg's structure and suggests a new perspective on its impact behavior.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156984</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enforcing Identification and Authentication Policies at Scale in a Cloud Microservices Architecture</title>
<link>https://hdl.handle.net/1721.1/156983</link>
<description>Enforcing Identification and Authentication Policies at Scale in a Cloud Microservices Architecture
Sinha, Varnika
As cloud adoption increases, cloud providers are competing to build more robust and secure platforms that keep growing and attract more users by ensuring their data is highly available yet not susceptible to malicious attacks. Many cloud platforms are distributed systems based on a microservices architecture in which many services communicate with one another. Communication among services should be authenticated to implement defense in depth rather than relying solely on the security of networks and infrastructure. However, these services can number in the hundreds or thousands, which increases the number of specialized secrets needed to provide authentication. Such large numbers of secrets are hard to manage and track in the case of exposure, leading to a risk of misconfiguration and leaks. We implement a framework that accounts for these secrets by managing their creation, rotation, and deletion in accordance with the existing architecture of the platform, using a Kubernetes custom resource and controller, and ensures that a secret with the correct permissions is always present when needed.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156983</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hollywood Workers vs Tech: In Theory and In the News</title>
<link>https://hdl.handle.net/1721.1/156982</link>
<description>Hollywood Workers vs Tech: In Theory and In the News
Cmehil-Warn, Christian
The 2023 SAG-AFTRA and WGA strikes in Hollywood were notable because of their explicit ties to the changing relationship between technology and labor. In particular, disputes around using generative AI in the workplace were widely reported in the news. This thesis examines the Hollywood strikes in two parts. The first part takes a political economy approach to examine the underlying causes of these changes in technology-labor relations. In particular, the thesis argues that an industry shift to distribution via streaming services, alongside increased vertical integration, brought about new imperatives for production and exponentially increased levels of data capture, enabling the labor conditions that led to the strikes. Theories of creative labor and technology-labor relations are used to describe the tensions. The resulting SAG-AFTRA and WGA collective bargaining agreements are then examined within these framings. The second part of the thesis quantitatively explores the relationship between news media (which has its own complex relationship with technology) and the Hollywood strikes using natural language processing techniques. Sentiment analysis and sentence embeddings are used to quantify and compare news articles across different characteristics. The results of the analysis are inconclusive.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156982</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decentralized Data Markets</title>
<link>https://hdl.handle.net/1721.1/156981</link>
<description>Decentralized Data Markets
Lu, Charles
Acquiring access to massive amounts of data has become fundamental to state-of-the-art artificial intelligence systems. However, as data value increases, data owners have challenged current norms and practices of data acquisition. Data marketplaces have been promoted to fairly compensate data producers and incentivize greater data sharing. In this thesis, I describe a decentralized model of data markets to overcome privacy concerns in siloed, data-limited domains such as healthcare. I propose two federated techniques to automatically select a subset of data sellers and datapoints for a buyer given some sample data. I also examine the socio-technical implications of emerging data markets for medical data and synthesize ethical principles for medical data marketplaces. Decentralized data markets have the potential to enable new AI economies through more robust, transparent, and participatory data sharing platforms. Through the contributions in this thesis, I hope to make a positive step towards realizing a future where transformative data-enabled technologies such as general-purpose machine learning systems are developed more responsibly and the benefits are distributed more equitably.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156981</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contesting Design: Ancestral Technology as Portal to Post-Design(s)</title>
<link>https://hdl.handle.net/1721.1/156980</link>
<description>Contesting Design: Ancestral Technology as Portal to Post-Design(s)
Reynolds-Cuéllar, Pedro
Nowadays, designers and technologists are constantly exposed to increasingly technocentric views of the future, primarily fueled by dominant ideologies: scalability, universal applicability, and profit, among others. Many of these future makers are preparing in the present, often at institutions reproducing these ideologies. However, this established understanding of what technology is and what is worthy of design is currently being challenged. Literature and practice connecting with ways of knowing and doing outside this dominant lens are rising in both technology and design studies. Alternative design programs at higher education institutions, preparing students for a world where technology is de-centered, and grassroots initiatives building futures through Indigenous technology are some of the ways in which these techno-narratives can be contested. This dissertation joins these efforts by foregrounding, and moving into practice, alternative ways to teach design and think about technology.&#13;
I start by exploring the value distribution from participatory design initiatives across participants and introduce a model for longitudinal assessment of these programs. Using the findings and insights from this study, I propose and implement two largely immersive university courses on technology design in close collaboration with rural collectives in Colombia. In contributing to methodological shifts within participatory design, I foreground connections at its intersection with Indigenous research methods. In giving a language to these proposals, I advance the notion of ‘Ancestral Technology’ as an alternate framework to approach technology design. It is a form of world-making (design) that primarily supports cultural cohesion, is rooted in bounded geography, and has a history living through collective memory. As designers and technologists interested in helping build a future outside the techno-centric imaginary, we must connect to the ancestral.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156980</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Imaging the Voltage of Neurons Distributed Across Entire Brains of Larval Zebrafish</title>
<link>https://hdl.handle.net/1721.1/156979</link>
<description>Imaging the Voltage of Neurons Distributed Across Entire Brains of Larval Zebrafish
Wang, Zeguan
Neurons interact in networks distributed throughout the brain. Although much effort has focused on whole-brain calcium imaging, recent advances in genetically encoded voltage indicators (GEVIs) raise the possibility of imaging the voltage of neurons distributed across entire brains. However, due to the high imaging speed and signal-to-noise ratio requirements of GEVIs, microscopy hardware to date has only been able to image the voltage of neurons within subregions of the brain, even for small animals like the larval zebrafish. To address this challenge, this thesis presents a high-speed remote scanning light-sheet microscope capable of imaging GEVI-expressing neurons distributed throughout entire brains of larval zebrafish at a volumetric rate of 200.8 Hz. The microscope combines remote refocusing and an ultrafast dual-camera system to significantly enhance the scanning and acquisition speed of light-sheet microscopy. Using this microscope, we measured the voltage of ~1/3 of the neurons of the larval zebrafish brain, distributed throughout it. We observed that neurons firing at different times during a sequence were located at different brain locations: for sequences elicited by a visual stimulus, firing mapped onto locations throughout the optic tectum, while during stimulus-independent bursts, it mapped onto locations in the cerebellum and medulla. Whole-brain voltage imaging may open new frontiers in understanding the fundamental operation of neural systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156979</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Policy, People, and Place Impacts of Mining for the&#13;
Clean Energy Transition in the US</title>
<link>https://hdl.handle.net/1721.1/156978</link>
<description>The Policy, People, and Place Impacts of Mining for the&#13;
Clean Energy Transition in the US
Randall, Abigail Marie
To meet the growing demands of the energy transition, we need to rapidly deploy mines to supply the minerals for clean energy technologies. This presents a set of challenges, or tensions, at the energy transition level, policy level, and mine level. This thesis seeks to answer two questions: What are the tensions for mining in the US? How do we decide where to permit these mines given the realities of environmental and community impacts? To address the tensions at the energy transition level, I establish copper, cobalt, nickel, and lithium, or energy transition minerals, as the focus of this thesis. Then, to address policy tensions, I conducted a geospatial analysis and found that 38% of the US’ energy transition mineral resources are on or near difficult-to-permit lands, with 92.7% of those resources being copper. To understand how these tensions play out in practice, I created three case studies through a series of interviews and a review of public comments. The first case study is of the East Boulder and Stillwater Mines. In this case, stakeholders came together to form a Good Neighbor Agreement, a legally binding contract between the mine owner and grassroots community organizations. The agreement is an adaptable framework for mine decision-making, which shows how stakeholders can work creatively within the tensions of mining for the energy transition. The second case, the Twin Metals Minnesota case study, shows how political tensions can introduce risk and uncertainty into the mine permitting process and prevent a mine from moving forward. The third is an Indigenous lands case study centered on the Thacker Pass lithium mine, which illustrates how a tensions framing is critical where the tradeoff framing has historically risked Indigenous sovereignty over their lands. The identified tensions flow into the policy recommendations, which are to: 1. Replicate solutions that maximize gains to stakeholders; 2. Rely on currently underutilized policy options to increase transparency and consolidate review in the permitting process; and 3. Look downstream in the energy transition to learn from newer industries. Taken together, this thesis tells a story of what types of mines need to be deployed in the US to meet the needs of the clean energy transition, whether and where mines can be deployed under current policy constraints in the US, and how mines are deployed in practice.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156978</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unlocking Collective Intelligence in Decentralized AI</title>
<link>https://hdl.handle.net/1721.1/156977</link>
<description>Unlocking Collective Intelligence in Decentralized AI
Gupta, Gauri
In the current evolving digital landscape, vast repositories of data and knowledge often remain siloed and untapped due to privacy concerns and centralized control. Thus, despite the transformative potential of artificial intelligence, its utilization in societal sectors lags behind other industries. For example, in healthcare, data privacy concerns and a lack of incentives and trust in the system prevent collaboration at scale. This necessitates the development of efficient methods for decentralized learning that preserve privacy while generating wisdom whose quality is on par with that achievable under data centralization. It involves first identifying and creating essential building blocks that encourage collaboration while preserving the decentralized nature of these critical digital paradigms. A key challenge here is to facilitate collaboration among distrustful, disconnected, and disincentivized entities possessing distinct assets such as data, models, and computation resources. Harnessing the collective wisdom latent within decentralized networks will unlock new avenues for innovation and human collaboration. Therefore, the primary aim of this thesis is to expedite AI adoption in decentralized systems by introducing novel algorithms and systems capable of extracting collective intelligence while preserving privacy. &#13;
&#13;
This thesis addresses the following research questions: First, it delves into methods for training machine learning models collaboratively while simultaneously protecting the privacy of raw data and the proprietary nature of individual models. Second, it explores the coordination mechanisms among system nodes in the absence of a central authority or trusted server to ensure orderly collaboration. Specifically, it answers questions such as: Whom should a node talk to? When does random collaboration selection work? Finally, it investigates strategies for conducting crowd-sourced decision-making to obtain population-level predictive results, scaling efficiently to encompass millions of agents.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156977</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Benchmarking Pavement Environmental Performance Using Data-Driven Modeling and Policy</title>
<link>https://hdl.handle.net/1721.1/156976</link>
<description>Benchmarking Pavement Environmental Performance Using Data-Driven Modeling and Policy
Vaidyanath, Varsha
Recently, federal and state governments have been implementing policy to reduce the embodied emissions from the production of materials. However, pavement materials impact emissions throughout the pavement lifecycle, not just during production. This paper addresses how a new pavement evaluation system and policy framework might drive better solutions to reduce carbon emissions from a climate change standpoint. The main components include: establishing why current pavement rating systems and current policy are not sufficient; performing a data-driven analysis with a grading and scorecard system to assess, compare, and summarize pavement design quality; and proposing an effective policy framework to implement the system.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156976</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Mechanical Behavior of a Traditional Japanese Joint for Flexible Structural Design</title>
<link>https://hdl.handle.net/1721.1/156975</link>
<description>Exploring the Mechanical Behavior of a Traditional Japanese Joint for Flexible Structural Design
Ortea Varela, Ines
This research examines the mechanical behavior of a traditional Japanese joint, the Mortised Rabbeted Oblique (MRO) splice. Through computational simulations employing Finite Element Analysis (FEA), the study examines a continuous beam and an unmodified MRO splice, revealing expected behavior in the beam and unexpected stress concentration and displacement asymmetry in the splice. Topology optimization of the splice’s end sections yields iterations with varying volume reductions (50%, 70%, and 90%), showing significant topology differences between the two ends. Subsequently, all iterations were fabricated through 3D printing using PLA and subjected to three-point bending testing. Experimental results confirm the computational findings, demonstrating reduced strength in the MRO splice compared to the continuous beam. A surprising increase in ductility and maximum load resisted by the iterations with 50% and 70% volume reductions is observed. This finding underscores how modifying the end beams significantly influences the overall behavior of the splice mechanism.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156975</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Fairness of Artificial Intelligence Models for Radiology Image Classification</title>
<link>https://hdl.handle.net/1721.1/156974</link>
<description>Evaluating Fairness of Artificial Intelligence Models for Radiology Image Classification
Sandadi, Varsha
With the increasing prevalence of AI-assisted decision-making in the healthcare domain, evaluating the fairness of machine learning models is more central than ever. Measuring the fairness of medical decision-support systems has enormous impacts on patients of different backgrounds and can influence how clinicians make decisions. In this study, we conduct a fairness analysis of the top 8-10 performing machine learning and artificial intelligence models from the Radiological Society of North America cervical spine fracture detection challenge and abdominal trauma detection challenge. Seven metrics are used for a more comprehensive assessment of fairness. Our findings indicate that cervical spine fracture detection models exhibit overall fairness, while abdominal trauma detection models demonstrate some unfairness in specific injury regions, possibly due to limited sample size. We also explore the performance of top models from the intracranial hemorrhage detection challenge across clinician-labeled "easy," "medium," and "hard" cases, revealing a lower accuracy rate on hard cases. This study underscores the need for additional model testing and comprehensive data representation to ensure fairness before real-world deployment in healthcare systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156974</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling spatial mapping, memory and their underlying mechanisms in the hippocampal complex</title>
<link>https://hdl.handle.net/1721.1/156973</link>
<description>Modeling spatial mapping, memory and their underlying mechanisms in the hippocampal complex
Sharma, Sugandha
Humans form mental representations of the space and environment around them. This ability is fundamental to tasks such as navigation, spatial reasoning, and understanding the relationships between objects in the environment. Spatial mapping in humans involves several cognitive processes, including perception, memory, and spatial reasoning. Memory plays a crucial role in spatial mapping. As individuals move through an environment, they encode and store information about the spatial layout, which they can later recall to navigate or perform tasks. Further, spatial memory involves similar brain regions as those implicated in sequential episodic memories. Research on human spatial mapping has greatly advanced our understanding of how humans form these mental representations, but leaves us some way from a complete understanding. In particular, it has been difficult to understand what makes human spatial representations generalizable, enabling few-shot learning of maps of novel spaces; how humans store the vast amount of spatial information (maps) experienced through their lifetimes; and what the connection is between spatial memory and episodic memory in the brain, and why it is significant. In this thesis, I aim to answer these questions. First, I ask whether hierarchical spatial representations form the basis of generalizable spatial representations, leading to efficient exploration of novel spaces. I present a Map Induction framework that uses a compositional hierarchy to represent spaces, and present results on its utility for exploring novel spaces. Second, I ask how humans store the vast amount of information (e.g., the compositional map primitives required to form hierarchical spatial representations) experienced through their lifetimes.
I present a neural model called MESH (motivated by the brain’s entorhinal-hippocampal system) that has an exponential capacity and shows a gradual decay in retrieval quality with an increase in the number of stored memories, rather than a catastrophic drop. Third, I present Vector-HaSH, a model of the entorhinal-hippocampal circuit that forms an instance of MESH, preserving all its properties. This model unifies general associative memory, spatial memory, and episodic memory, providing a computational hypothesis for the unification of the spatial and episodic memory roles of the hippocampal complex. Overall, this research bridges the computational, algorithmic, and implementation levels of analysis to explain how humans represent and reason about spaces.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156973</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Sim-to-Real Robot Parkour from RGB Images</title>
<link>https://hdl.handle.net/1721.1/156972</link>
<description>Learning Sim-to-Real Robot Parkour from RGB Images
Jenkins, Andrew
Advancements in quadrupedal robot locomotion have yielded impressive results, achieving dynamic maneuvers like climbing, ducking, and jumping. These successes are largely attributed to depth-based visual locomotion policies, known for their robust transferability between simulated and real-world environments (sim-to-real). However, depth information inherently lacks the semantic information present in RGB images. This thesis investigates the application of an RGB visual locomotion policy for navigating complex environments, specifically focusing on extreme parkour terrain. While RGB data offers a deeper understanding of the scene through semantic cues, it presents challenges in sim-to-real transfer due to large domain gaps. This work proposes a novel approach for training an RGB parkour policy and demonstrates that it achieves performance comparable to depth-based approaches in simulation. Furthermore, we successfully deploy and evaluate our RGB policy on real-world parkour obstacles, signifying its potential for practical applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156972</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Multi-Stage Machine Learning Pipelines for Extracting Structured Key-Value Pairs from Documents</title>
<link>https://hdl.handle.net/1721.1/156971</link>
<description>Leveraging Multi-Stage Machine Learning Pipelines for Extracting Structured Key-Value Pairs from Documents
Pyo, Bryan
In the rapidly growing field of information extraction, the ability to automatically and accurately extract structured data from sources has grown in importance across several industries. This need has arisen largely due to the vast quantity of data that is currently available and still being actively collected by these industries for various purposes. In a world where data has grown greatly in quantity and importance, the ability to parse this data into usable information has become an even more essential endeavor. Although information extraction has traditionally been a relatively labor-intensive task, with the rising sophistication and applicability of machine learning and computer-aided document analysis, automatic and more generalized methods of extracting relevant data from documents have become a major focus of research. This thesis discusses several pipelines that have been developed to extract data in the form of key-value pairs from specification sheets describing mechanical parts, achieving accuracies ranging from 80% to 100% depending on the pipeline and the target documents and key-value pairs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156971</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Achieving Secure and Performant Databases with Minimal Resource Overhead</title>
<link>https://hdl.handle.net/1721.1/156970</link>
<description>Achieving Secure and Performant Databases with Minimal Resource Overhead
Lim, Darren
Modern cloud databases run in virtualized environments, which are typically implemented with Linux virtual machines (VMs). However, this poses two main risks. Typically, trusted database code runs alongside stored procedure code, which means that user-submitted stored procedure code can pose a security risk to the database and the data itself if the code contains vulnerabilities. Additionally, since Linux has such a large codebase, Linux-based VMs are subject to complex latency concerns and a large attack surface. Using a low-level shared memory protocol, it is possible to create a secure and performant communication channel between a database VM and the VMs of its stored procedures. This protects the database from vulnerabilities in the stored procedure code. Furthermore, by using unikernels instead of Linux VMs, the machines running the VMs can minimize the CPU/memory overhead per VM while also improving security for the DBMS. Overall, these changes allow cloud-hosted machines to utilize resources more efficiently.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156970</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigating Undercutting Attacks: A Study on Mining and Transaction Fee Behavior</title>
<link>https://hdl.handle.net/1721.1/156969</link>
<description>Mitigating Undercutting Attacks: A Study on Mining and Transaction Fee Behavior
Bao, Claire
With block rewards dwindling in Bitcoin, a miner’s revenue will become increasingly reliant on transaction fees. However, these transaction fees are highly variable, which could result in undercutting attacks. Undercutting attacks occur when miners intentionally fork the blockchain in an attempt to steal transactions from an already-mined block. These attacks could cause repeated forking of the blockchain, thereby rendering Bitcoin unstable and less secure long-term. The original paper by Carlsten et al. proposing these attacks made assumptions about the future mining environment. For instance, they assumed that block size limits were large relative to the number of transactions and that all transactions had the same fee. &#13;
&#13;
This thesis aims to examine whether undercutting attacks would still be a threat under different mining dynamics. Specifically, we examine two important mempool characteristics that have changed since the original paper was written: the block size limit and the fee gradient. By investigating what happens as these characteristics and factors change, our research is able to not only generate a holistic view of whether undercutting attacks are a threat for a wide variety of possible mempool dynamics, but it also provides guidelines on what range each of these measurable characteristics must fall within in order for the blockchain to be secure and stable long-term. Our research found that the blockchain is safe from undercutting attacks when the block size limit is small relative to the number of transactions, but the blockchain becomes more susceptible to undercutting attacks if transactions with much higher fees enter the mempool infrequently even for smaller block size limits. Moreover, we extend the logic of undercutting attacks from the original paper to show that, if the mempool dynamics are such that the undercutting occurs long-term, the tangible impact on users is that very little progress will be made as fully rational miners will end up only including one transaction per block, regardless of the total amount of available transactions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156969</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Long-range Genomics Benchmark Technology and More</title>
<link>https://hdl.handle.net/1721.1/156968</link>
<description>Long-range Genomics Benchmark Technology and More
Polen, McKinley
The transformer architecture has emerged as a popular choice in various domains, owing to its ability to capture long-range dependencies and its parallel processing capabilities. In the context of genomics, where dependencies often span over 100,000 base pairs, the quadratic computational complexity of the attention mechanism, a core feature of the transformer architecture, poses a significant bottleneck. With the goal of creating a genomics foundation model (FM), this paper aims to address challenges associated with long-range dependencies in genomics. Our survey encompasses modifications to the attention mechanism, the creation of a genomics long range benchmark (GLRB), and the evaluation of various transformer and non-transformer architectures. These efforts collectively lay the groundwork for the development of a robust genomics foundation model, opening new possibilities for genomics research and applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156968</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Modal Protein Function Prediction using a Joint Embedding Space from Two Graph Neural Networks</title>
<link>https://hdl.handle.net/1721.1/156967</link>
<description>Multi-Modal Protein Function Prediction using a Joint Embedding Space from Two Graph Neural Networks
Tysinger, Emma P.
In bioinformatics and proteomics, determining protein functions experimentally is expensive and slow. There’s a growing need for precise and quick computational prediction methods, filling the gap between sequence discovery and functional understanding. Over recent years there has been an influx of deep-learning protein folding algorithms used for predicting function by transfer learning. Protein function is only partially captured by any single modality, including structure; in isolation, each modality gives us only a partial understanding of function. Uniting these modalities is an important step toward understanding function more holistically. We present a multi-modal framework using two graph neural networks to infer a joint embedding space that captures many properties of a protein, including structure, disease associations, drug interactions, protein interactions, biological processes, and more. We evaluate the embedding space on downstream prediction tasks including enzyme commission (EC) numbers and gene ontology (GO) terms. Experimental results on protein function prediction, as well as a qualitative visual analysis of the protein embedding space, show that our framework successfully captures both the structure and the biomedical context of proteins, and outperforms structure-only encoders.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156967</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inertial Navigation System Drift Reduction Using Scientific Machine Learning</title>
<link>https://hdl.handle.net/1721.1/156966</link>
<description>Inertial Navigation System Drift Reduction Using Scientific Machine Learning
McManus, Matthew
Inertial Navigation Systems (INS) are crucial for accurate navigation in GPS-denied environments, but they suffer from drift errors that accumulate over time. This thesis introduces Scientific Machine Learning (SciML) as an innovative approach to mitigate INS drift by integrating physical models with machine learning algorithms. The proposed SciML architecture leverages neural networks to learn complex error patterns and relationships from simulated IMU data, outperforming conventional techniques like Kalman filtering. Utilizing a simulation-focused approach with the Julia programming language and the High-Performance Inertial Navigation Development Repository (HIDR) library, the research generates realistic datasets encompassing diverse trajectories, sensor errors, and operational conditions. The SciML methodology incorporates data generation, INS mechanization, error modeling using neural networks, and a filtering framework that integrates the Extended Kalman Filter (EKF) with batch filtering techniques. Experimental results demonstrate the superior performance of the SciML-based INS in reducing position, velocity, and attitude errors compared to a baseline Kalman filter. This pioneering approach of fusing SciML with INS physical models holds promise for revolutionizing drift error mitigation and advancing the field of navigation systems, paving the way for more accurate, reliable, and resilient navigation in GPS-denied environments, with potential applications in aviation, robotics, and autonomous vehicles.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156966</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>AI Interfaces for Augmenting Episodic Memory</title>
<link>https://hdl.handle.net/1721.1/156965</link>
<description>AI Interfaces for Augmenting Episodic Memory
Zulfikar, Wazeer Deen
Episodic memory, the memory of personal experiences, is a core component of human cognition. It functions within the neural substrate to store progress towards personal goals. Thus, it influences human behavior by enriching social interactions, forming a personal narrative, and facilitating personal growth. With the rise of challenges such as poor sleep, aging and dementia, and fragmented attention, people experience difficulties with episodic memory retrieval. These difficulties range from momentary lapses such as forgetting previous interactions during conversations, to recalling multiple events during reminiscing and decision-making. &#13;
&#13;
In this work, we explore artificially intelligent (AI) systems that augment episodic memory by enabling people to interact with their memories effectively. We design, develop, and evaluate two systems: (i) Memoro, a wearable audio-based memory assistant that presents concise suggestions in real-time while minimizing disruption to the user’s primary task, and (ii) Resonance, a web-based reflective memory assistant that offers actionable suggestions to help users savor their past, present, and future experiences for mental health benefits. By conducting an in-person user study for Memoro and a longitudinal online user study for Resonance, we investigate the effects of these systems on users, measure their technical efficacy, and gather feedback on user experiences. Recent advances in artificial intelligence offer novel opportunities to enhance episodic memory. Therefore, exploring interfaces that seamlessly integrate with human behavior is crucial to ensure that AI-based systems enrich everyday experiences.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156965</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Embedding engineering intuition into computational design through interactive topology optimization</title>
<link>https://hdl.handle.net/1721.1/156964</link>
<description>Embedding engineering intuition into computational design through interactive topology optimization
Schiffer, Gillian
With increasing pressure to generate low-environmental-impact designs, topology optimization presents a flexible, material-efficient solution. Topology optimization is a computational design method that produces lightweight, high-performing designs uniquely suited to a user’s objective function and constraints. However, major obstacles to topology optimization’s widespread use remain, including increased complexity and computational time for advanced, nonlinear optimization formulations such as buckling or stress, lack of geometric control, and difficulty of manufacturing. Interactive topology optimization algorithms overcome these obstacles by prompting users to directly modify the geometry of the design as the optimization runs. By embedding their engineering intuition into the design, users address concerns about complex failure modes, manufacturability, or alternative engineering performance metrics. This work presents two interactive approaches: HiTop 2.0, which empowers users to selectively enforce minimum and/or maximum solid and/or void feature size controls, and interactive infill topology optimization, which incorporates user-drawn infill patterns into regions of the optimized design. The interactive methods are demonstrated on numerical 2D examples, HiTop 2.0 is extended to a numerical 3D example, and interactive infill is experimentally validated with 2.5D additively manufactured test beams.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156964</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Electricity Distribution Network Tariffs for Beneficial Electrification</title>
<link>https://hdl.handle.net/1721.1/156963</link>
<description>Designing Electricity Distribution Network Tariffs for Beneficial Electrification
Turk, Graham
Decarbonizing the transportation and residential building sectors will require rapid electrification through the uptake of electric vehicles (EVs) and cold climate heat pumps (CCHPs), respectively. There is broad consensus that the flat volumetric electricity tariffs currently in place for residential customers in most of the US discourage electrification and do not reflect the underlying marginal costs of electricity delivery. Under flat volumetric tariffs, utilities are projecting sharp rises in distribution-level peak demand, which will necessitate network upgrades whose costs are recovered from all grid users. Alternative rate designs can help mitigate the need for these upgrades by shifting new demand away from peak periods. However, there is an emerging narrative that electricity tariff design is a zero-sum game: regulators can either protect vulnerable households or encourage electrification, but not both. In this thesis, we challenge that perception by asking whether well-designed distribution network tariffs can deliver a win-win in the long run, reducing operating costs for EVs and/or CCHPs and average network costs for households that cannot yet afford to electrify. We answer this question by running a series of bottom-up optimizations to simulate households’ responses to alternative network tariff designs in two distinct geographies, then assessing cost impacts on different household groups. We use open-source data on household electricity consumption and travel behavior. We find that beyond very low adoption levels, time-of-use (TOU) per-kWh network tariffs, which several states have adopted as the default, perform poorly on all metrics and lead to large increases in local peak demand. Per-kW capacity tariffs (subscription and demand charges) are effective at mitigating EV-driven peaks, especially when paired with TOU energy tariffs.
We recommend that regulators separate network charges from energy charges and introduce a per-kW subscription network tariff to collect a portion of the network revenue requirement. This approach will reduce the total cost of ownership of electrified devices while mitigating the network upgrades needed to maintain reliability. Our recommendations offer a path towards rapid electrification that benefits all grid users.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156963</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Addressing Misalignment in Language Model Deployments through Context-Specific Evaluations</title>
<link>https://hdl.handle.net/1721.1/156962</link>
<description>Addressing Misalignment in Language Model Deployments through Context-Specific Evaluations
Soni, Prajna
Language model-based applications are increasingly being deployed in the real world across a variety of contexts. While their rapid success has realized benefits for society, ensuring that they are trained to perform according to societal values and expectations is imperative given their potential to shape societal values, norms, and power dynamics. Evaluation plays a key role in language model (LM) alignment and policy-making. Presently, LM alignment and evaluations are based on developer- and researcher-prescribed attributes, with many benchmarks focusing on performance as dictated by generalized or primarily Western datasets that may not accurately reflect the deployment context. This results in an inevitable misalignment where a model trained on human preference proxies in context A is deployed in context B. &#13;
&#13;
Existing evaluation measures and alignment techniques are heavily biased towards the values and perspectives of model developers. In this thesis, I argue that in order to ensure that alignment efforts are specific to their deployment contexts, it is necessary and feasible to design open-ended and participatory methods to elicit a broader range of context-specific axes. I demonstrate the viability of this through CALMA, a non-prescriptive and grounded participatory process that successfully elicits distinct and context-specific alignment axes for evaluation datasets through in-context studies with two different communities. I further explore the ways in which broader participation can enable more effective adaptive AI regulation, given the crucial role of evaluations in addressing the technology-policy lag.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156962</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Garabit Viaduct: A Historical and Structural Study</title>
<link>https://hdl.handle.net/1721.1/156961</link>
<description>The Garabit Viaduct: A Historical and Structural Study
Harlin, Anne-Sixtine
This thesis investigates the Garabit Viaduct, providing a historical study and structural analysis of its truss arch. It aims to unravel the ingenuity behind the arch's elegant shape and design process. By examining historical plans and the memoirs of engineers such as Gustave Eiffel and Léon Boyer, this research uncovers the evolution of the viaduct's design and shape, revealing that the geometry of the arch was form-found using graphic statics. This study sheds light on the structural design hypotheses employed by Gustave Eiffel and Maurice Koechlin in sizing the members, providing insights into design practices of the late 19th century. Additionally, the study of the primary source documents left behind by the engineers suggests the method used for the arch's design may have influenced the shaping of the supporting piers, opening avenues for future research into the broader implications for Eiffel's later iconic tower.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156961</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Peripheral Nervous System Modulation with Wireless Cellular Sized Freestanding Injectable Devices</title>
<link>https://hdl.handle.net/1721.1/156960</link>
<description>Peripheral Nervous System Modulation with Wireless Cellular Sized Freestanding Injectable Devices
Patel, Preet
Designing novel neural interfaces is essential for various medical applications, scientific research, and human augmentation. One of the foundations of neural interfaces and bioelectronic medicine is the electrical stimulation of excitable cells to interface the body with electronics and treat a variety of diseases. Current technologies, while efficacious, are limited by their bulkiness, require highly invasive surgeries, are unable to target at single-cell resolution, and are prone to foreign body reactions. Optogenetics can address these issues but fundamentally requires genetic modifications, which makes it difficult to implement in vivo and raises issues of muscle atrophy and toxicity, specifically in the peripheral nervous system (PNS).&#13;
&#13;
This work aims to advance bioelectronic medicine by developing efficient, wireless, cellular-sized electronic devices that can be administered in a drug-like fashion. These innovative, substrate-free nanoelectronic devices, termed injectable electronics, can be activated and controlled using near-infrared (NIR) light, enabling minimally invasive, targeted neuromodulation deep within the peripheral nervous system (PNS). By overcoming the limitations of current implantable devices, this groundbreaking approach has the potential to transform the way we diagnose and treat a wide range of neurological disorders.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156960</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Prime Factorization of Proteins</title>
<link>https://hdl.handle.net/1721.1/156959</link>
<description>Towards a Prime Factorization of Proteins
Radev, Simeon
A classical problem of machine learning is the interpretability of a model’s latent information processing. This is particularly the case in the richly complex field of protein analysis, where novel insights into the structural organization of proteins can help illuminate their functional space, and in particular lead toward a factorization of the structural space into a set of motif building blocks that completely span this universe. This thesis creates a new inference interface for performing such analysis by leveraging the sequential learning process of a neural autoencoder to construct a decomposition of proteins as a hierarchical sequence of embedded representation vectors. The further development of this work could lead to a greater understanding of the organizational complexity of natural phenomena, and in particular of the uniquely complex relationship between protein structures and their function.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156959</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of Deep Learning Algorithms in Predicting Seismic Response of a Reinforced Concrete Structure</title>
<link>https://hdl.handle.net/1721.1/156958</link>
<description>Evaluation of Deep Learning Algorithms in Predicting Seismic Response of a Reinforced Concrete Structure
Morgan, Jacob A.
This thesis presents an evaluation of the performance of three well-established deep learning algorithms in predicting the response of a six-story instrumented reinforced concrete hotel in California to seismic excitation. Given the increasing availability of strong-motion data and expanded usage of deep learning in structural health monitoring, this thesis seeks to evaluate the predictions of purely data-driven and physics-informed architectures using processed instrumentation data in order to more accurately predict structural response for use in structural health monitoring and performance-based design applications.&#13;
&#13;
By employing a variety of results metrics previously used in the literature, including correlation coefficients, normalized error distributions, and peak errors, this thesis examines different components of the models’ capabilities, investigating the patterns in the data learned by the computational mechanisms of each architecture and exploring the feasibility of a generalized approach for further application in structural response prediction. &#13;
&#13;
Findings from the work show the data-driven Long Short-Term Memory (LSTM) network performing most accurately, though not consistently outperforming the other algorithms. Some trends in the data could be evidence that different architectures may be better equipped to predict different mode shapes and frequency content. For example, the data-driven and physics-guided LSTM models predicted the third floor’s response more accurately than the roof’s, whereas the physics-guided convolutional neural network (CNN) showed the opposite pattern, highlighting a contrast between the two base architectures. This thesis also contributes to this growing field by documenting the experimental setup in detail to allow for the replication of results and to facilitate future application by structural engineers.&#13;
&#13;
As structural engineering research in deep learning continues to gain popularity, this thesis provides an experimental basis for a case study that can be followed and replicated to motivate future experimentation, and offers compelling directions for future work to further the use of deep learning in structural response prediction and structural health monitoring as a whole.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156958</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing a Psychometric Tool to Measure the Emotional Impact of Visual Content</title>
<link>https://hdl.handle.net/1721.1/156957</link>
<description>Developing a Psychometric Tool to Measure the Emotional Impact of Visual Content
Cucu, Theodor
This thesis investigates the human valence response to sequences of visual images. We first use crowd-sourcing and a novel nine-point psychometric scale to estimate human valence responses to individual images from the OASIS image set with high reliability (split-half Spearman rank-correlation ρ = 0.95). In a separate group of human participants, we then estimate valence responses following short, random sequences of those images (of length ≤ 10). Our key finding is that these sequence-contingent valence responses can be closely predicted by a simple linear combination of the estimated human valence responses to individual images (held-out ρ = 0.94). The combination weights are largest for the final image in the sequence; intuitively, this means the final image by itself can make predictions with high goodness-of-fit (ρ = 0.87). In summary, this research shows new evidence for a simple relationship between valence responses to individual images and valence responses to image sequences, with implications for future studies and practical applications in psychological assessment and beyond.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156957</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-omic Analysis of Neurodegeneration in Alzheimer’s Disease and Related Dementias</title>
<link>https://hdl.handle.net/1721.1/156956</link>
<description>Multi-omic Analysis of Neurodegeneration in Alzheimer’s Disease and Related Dementias
Howe, Stephanie Pui-kay
The advent of single-cell sequencing has revolutionized the granularity at which we can understand genetics and underlying cell biology. This enables us to analyze both the transcriptome and epigenome of various tissues, offering new insights into the molecular mechanisms that underlie diseases such as neurodegeneration. This study focuses on neurodegenerative disease at single-cell resolution across the following proteinopathies: Alzheimer’s Disease (AD), Frontotemporal Dementia (FTD), Lewy Body Dementia (LBD), and Vascular Contributions to Cognitive Impairment and Dementia (VCID). We utilize both single-cell RNA sequencing (scRNA-seq) and single-cell ATAC sequencing (scATAC-seq) to perform a joint analysis of these conditions, examining both modalities holistically. Our research characterizes a multi-omic data set comprising 2,820,565 cells from 491 samples of prefrontal cortex across the aforementioned conditions, with all samples subjected to scRNA-seq and 63 to scATAC-seq. Leveraging this data, we conduct a multi-omic analysis of Alzheimer’s Disease and Related Dementias (ADRD) by exploring differences in the transcriptome and epigenomic erosion profile across conditions, shedding light on the intricacies of cortical aging. Ultimately, we identify potential molecular and genetic markers that drive the heterogeneous relationship between pathology, epigenetic erosion, and cognition in individuals affected by these conditions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156956</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Cortex-Hippocampus Interactions During Language Processing</title>
<link>https://hdl.handle.net/1721.1/156955</link>
<description>Characterizing Cortex-Hippocampus Interactions During Language Processing
Lee, Jiachen Elizabeth
The role of medial temporal lobe structures, including the hippocampus, in language processing remains largely unknown. In patients with hippocampal damage, language is left largely intact [Vargha-Khadem et al., 1997], suggesting that the hippocampus is likely not necessary for language processing. Recent evidence, however, has shown that the hippocampus may serve functions outside its traditional roles in episodic memory and spatial navigation, and may generally aid in the encoding of relationships across time and space [Cohen and Eichenbaum, 1993]. Hence, the hippocampus may be involved in processes that are also implicated in language processing. Indeed, some patients with hippocampal damage show deficits in resolving ambiguous discourse referents [Rubin et al., 2011, Duff et al., 2011], reconstructing narratives [Race et al., 2011a], and display limited linguistic flexibility in engaging in "verbal play" [Duff et al., 2009]. Here we leverage a large-scale fMRI dataset (n=790) and identify a region that responds to meaningful language in the anterior portion of the left hippocampus. We then characterize its response profile and show that it is responsive to semantically meaningful material but is not engaged during cognitively demanding spatial working memory and arithmetic tasks. Next, we examine the relationship between hippocampal and cortical language processing, starting with the neural correlates of word- and sentence-memorability in both the hippocampal and cortical language areas. Lastly, we leverage an encoding-model-guided procedure to search through a large set of sentences to identify those that are predicted to maximally differentiate responses in the cortical and hippocampal language areas. We find that cortical language areas are largely driven by surprisal, while hippocampal language areas display preferences towards more imageable and concrete sentences.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156955</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accurate and Fast Approximate Graph Mining at Scale</title>
<link>https://hdl.handle.net/1721.1/156954</link>
<description>Accurate and Fast Approximate Graph Mining at Scale
Arpaci-Dusseau, Anna
Approximate graph pattern mining (A-GPM) is an important data analysis tool for numerous graph-based applications. Sampling-based A-GPM systems exist to provide automation and generalization over a wide variety of use cases. Despite improved usability, two major obstacles prevent existing A-GPM systems from being adopted in practice. First, the termination mechanism that decides when to terminate sampling lacks theoretical backing on confidence, and is significantly unstable, and thus slow, in practice. Second, they suffer particularly poor performance when dealing with “needle-in-the-hay” cases, because a huge number of samples is required to converge, given the extremely low hit rate of their lazy-pruning strategy and fixed sampling schemes. We build ScaleGPM, an accurate and fast A-GPM system that removes these two obstacles. First, we propose a novel on-the-fly convergence detection mechanism to achieve stable termination and provide a theoretical guarantee on the confidence, with negligible online overhead. Second, we propose two techniques to deal with the “needle-in-the-hay” problem: eager-verify and hybrid sampling. Our eager-verify method drastically improves the sampling hit rate by pruning unpromising candidates as early as possible. Hybrid sampling further improves performance by automatically choosing the better of fine-grained and coarse-grained sampling schemes. Experiments show that our online convergence detection mechanism can precisely detect convergence, and results in stable and rapid termination with theoretically guaranteed confidence. We also show the effectiveness of eager-verify in improving the hit rate, and of the scheme-selection mechanism in correctly choosing the better scheme for various cases. Overall, ScaleGPM achieves a geometric-mean speedup of 565× (up to 610,169×) over the state-of-the-art A-GPM system, Arya.
ScaleGPM is also four orders of magnitude faster than the state-of-the-art exact GPM system, GraphZero. In particular, ScaleGPM handles billion-scale graphs in seconds, where existing systems either run out of memory or fail to complete in hours.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156954</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation on ImageNet Remaining Errors with TRAK</title>
<link>https://hdl.handle.net/1721.1/156953</link>
<description>Investigation on ImageNet Remaining Errors with TRAK
Ma, Lingyi
The ImageNet dataset is an important benchmark and test bed for computer vision models. Two of its most important characteristics are its size and difficulty, which motivated the breakthrough deep learning model AlexNet a decade ago. As research progresses and computational power grows, the best models nowadays can achieve accuracy as high as 90% on ImageNet. With such high accuracy, model predictions are usually of high precision, and the causes of the remaining long tail of errors are unknown. Studies reassessing ImageNet have found a nontrivial amount of label error and noise, and effort has been made to fix this label noise in the test set, mainly through manual review. However, few studies have attempted to fix labels in the training set, largely due to its scale. This thesis aims to understand the remaining errors that models are still making on the ImageNet dataset and investigate the labeling problems in the ImageNet training set, utilizing TRAK, a recently developed efficient data attribution method, to help identify problematic images among the 1.4 million images in the ImageNet training set.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156953</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tumor cell-intrinsic signals promoting tolerance and adaptation to oncogenic kinase inhibition</title>
<link>https://hdl.handle.net/1721.1/156951</link>
<description>Tumor cell-intrinsic signals promoting tolerance and adaptation to oncogenic kinase inhibition
Flower, Cameron Timothy
Therapeutics targeting oncogenic kinases have offered longer survival and superior quality of life for cancer patients with particular malignancies compared to the preceding standard of care. However, many patients still fail to show a clinically meaningful response to kinase inhibitors prescribed on the basis of tumor genotype, and nearly all responsive patients eventually develop resistance, limiting the curative potential of these agents. A more complete understanding of the molecular basis underlying therapy failure is required for designing new agents and combinations with improved response rates. In this thesis, I explore these issues using tractable experimental models in which genotype-matched kinase inhibitors fail to kill or durably arrest proliferation of cancer cells, with particular focus on the role of cellular signaling networks.&#13;
In the first part, I have characterized a panel of human lung cancer cell lines harboring genetic gain-of-function alterations of clinically actionable tyrosine kinases (TKs). Using commonly prescribed TK inhibitors (TKIs), I show that TK genetic status generally predicts whether or not a cell line will show any response to a genotype-matched TKI (GM-TKI), but is insufficient to predict drug tolerance, the ability of a cell line to sustain proliferation under drug. In drug combination experiments targeting co-mutated pathways, I show that some degree of tolerance to GM-TKI is explained by oncogenic co-mutations, but not across all lines. By leveraging targeted and untargeted mass spectrometry (MS) of endogenous tyrosine-phosphorylated proteins, which enables phosphosite-specific quantification of TK signaling networks, I report several cell line-specific vulnerabilities not predicted to exist at the genetic level, and the consensus observation that sustained activity of SRC family kinases (SFKs), or of the SRC-like kinases ABL1/2, is an important contributor to GM-TKI tolerance in all lines.&#13;
In the second part, I have examined the molecular events underlying drug-induced adaptation, the process by which drug exposure inadvertently drives upregulation of pro-survival signaling pathways. In a collaborative effort, we report the signaling and transcriptional dynamics underlying early adaptation to oncogenic BRAF inhibition in a patient-derived cell line model of human BRAF-mutant melanoma. We show by time-resolved MS of mitogenic signaling networks, computationally integrated with matched mRNA sequencing data, that adaptation to BRAF inhibition in our model system is promoted by early drug-induced compensatory SFK signaling, due in part to accumulation of reactive oxygen species via an impaired NRF2 antioxidant response. This concerted adaptive response promotes sensitivity to SFK inhibition across a panel of patient-derived BRAF-mutant melanoma cell lines and in a mouse xenograft model. The work described in both parts was aided by two MS software solutions I developed: one to automate the generation of targeted acquisition methods for protein phosphosites and pathways of interest, and the other to retain quantitative information from fragment ion spectra with missing values.&#13;
Together, this thesis reports new connections between cell signaling and kinase inhibitor response, and offers the intriguing hypothesis that SFK signaling may be a conserved barrier for maximally effective targeted cancer therapy.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156951</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stakeholder views on the uptake of sustainable and responsible nickel mining and processing supply chains for electric vehicles in Indonesia</title>
<link>https://hdl.handle.net/1721.1/156950</link>
<description>Stakeholder views on the uptake of sustainable and responsible nickel mining and processing supply chains for electric vehicles in Indonesia
Malik, Rameen Hayat
This thesis explores the evolution and contemporary challenges of Indonesia’s nickel industry within the context of the electric vehicle (EV) supply chain. It critically examines the sustainability and ethical considerations as Indonesia positions itself as a key player in the global transition to clean energy. The study provides a comprehensive analysis of Indonesia’s strategic moves to enhance the value derived from its extensive nickel reserves, underscored by the implementation of policies such as the raw export ban aimed at fostering local processing industries. Central to this examination is the dual role of nickel as both a critical and contentious resource, reflecting its classification as a critical mineral by multiple countries due to its indispensability in EV battery production and the substantial environmental and social challenges associated with its extraction and processing. Employing a policy mobility framework, this thesis navigates the trans-local dynamics of policy making in Indonesia, juxtaposing these with global, economy-wide pursuits of transportation decarbonization via the EV industry. Through a mixed-methods approach, combining literature review, stakeholder interviews, and field observations, the research unveils the multifaceted perspectives of various stakeholders, including industrial entities, government bodies, and civil society organizations. The findings highlight the significant influence of international investment, mainly Chinese investment, in shaping Indonesia’s nickel processing capabilities, while also noting the ethical dilemmas and environmental hazards posed by the industry’s expansion. Indonesia’s strategy to escalate value addition locally is critically assessed, revealing both progress and persistent ethical and environmental challenges.
Strategies are proposed to leverage the myriad resources, influence, and authority of actors along the EV supply chain to spur the growth of a sustainable and responsible supply of Indonesian nickel. The thesis contributes to the discourse on sustainable mineral supply chains by proposing policy recommendations aimed at reconciling economic ambitions with environmental and social imperatives. These recommendations advocate for enhanced governance structures, transparent supply chains, and international collaboration to achieve ethical sourcing practices. The research underscores the need for a balanced approach that not only caters to the economic aspirations of resource-rich nations but also adheres to global sustainability standards.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156950</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Impact of the Inflation Reduction Act and Hydrogen Storage in Salt Caverns in the Mid-Atlantic United States</title>
<link>https://hdl.handle.net/1721.1/156949</link>
<description>Modeling the Impact of the Inflation Reduction Act and Hydrogen Storage in Salt Caverns in the Mid-Atlantic United States
Armstrong, Les Gabriel
Hydrogen is widely understood to be critical for decarbonizing hard-to-abate sectors such as heavy industry and long-distance transportation, as well as for balancing a power grid dominated by variable renewable energy.&#13;
In this thesis, we first propose a methodology for evaluating the potential for hydrogen storage in geological salt resources. Our results show that the Michigan and Appalachian Salina basins are promising locations for hydrogen storage in salt caverns. After applying a coarse techno-economic filter, the storage potential of the remaining high-value caverns is 9.7 × 10⁸ metric tons of H₂ or 32.4 PWh in Michigan and 1.6 × 10⁷ metric tons of H₂ or 0.54 PWh in the Appalachian region.&#13;
We then perform a techno-economic analysis on these salt cavern resources, which we utilize as hydrogen storage options in Macro, an open-source energy system optimization model that couples the power, hydrogen, and carbon sectors. We then analyze the impact of the Inflation Reduction Act (IRA) and the presence of salt caverns on the United States Mid-Atlantic region in the year 2035. We find that salt caverns do not have a significant impact on the overall coupled energy system dynamics unless we force a 100% decarbonization constraint. In addition, we uncover a perverse behavior induced by the IRA’s hydrogen production tax credit within the model. Further work is required to understand whether this behavior is likely in practice or can be attributed to difficulties modeling real-world interactions and internal frictions between actors in the energy sector.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156949</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of turbine motion on floating offshore wind turbine aerodynamics</title>
<link>https://hdl.handle.net/1721.1/156948</link>
<description>Effect of turbine motion on floating offshore wind turbine aerodynamics
Tignol, Bo Junior
The quest to meet renewable energy targets and anticipate future growth in energy consumption has driven the continuous development of wind energy system design and its push for larger and more efficient wind turbines, especially in offshore environments. Floating Offshore Wind Turbines (FOWTs) are a promising alternative for capturing high wind energy potential in more difficult offshore environments that pose challenges to traditional bottom-fixed turbines. Yet, the understanding of FOWT behaviour under the translational and rotational degrees of freedom induced by dynamic floating motion remains a significant challenge. Indeed, there is considerable inconsistency in the interpretation of FOWT behaviour under floating motion. This thesis aims to evaluate the influence of surge and pitch motions on the aerodynamic behaviour of FOWTs through the interpretation of several modeling approaches and their differences. Various surge and pitch amplitude and frequency ranges are considered, and two large eddy simulation (LES) approaches, along with a simplified analytical model, are assessed with regard to their predictions of the axial induction, induced velocity, power production, and wake velocities. It was found that there is generally close agreement between surging inflow and surging actuator disk LES simulations, with a difference in time-averaged power production no larger than 1.8% for any of the investigated cases, confirming the hypothesized similarity between these two methods of simulating turbines in kinematic motion. Furthermore, it was found that, although the simplified analytical model performed well at low-frequency surge motions, it exhibited increasing underprediction of power production with increasing frequency. As for the pitch cases, the model exhibited low error compared to the LES simulations across the amplitudes investigated.
Moreover, unlike the variability in the surging data, the pitching LES exhibited less variation across the investigated frequencies, which suggests that the analytical model maintains better predictive capability across a diverse range of pitching motions. Looking forward, the results of this study suggest the need for continued in-depth evaluation of additional LES parameters such as the tip-speed ratio and thrust coefficient, along with validation and the development of an analytical model that can capture the observed frequency dependence. Finally, future work should also focus on LES at different combinations of freestream wind and surge and pitch motion to explore the potential formation of complex wake states, as well as the investigation of in-sync and out-of-sync joint pitch-and-surge cases to explore the occurrence of any nonlinear aerodynamic interactions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156948</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Processes of Stratification Breakdown and Restratification in Antarctic Coastal Polynyas</title>
<link>https://hdl.handle.net/1721.1/156947</link>
<description>Processes of Stratification Breakdown and Restratification in Antarctic Coastal Polynyas
Xu, Yilang
Antarctic coastal polynyas are areas of persistent open water surrounded by sea ice. They are characterized by deep winter mixing due to dense water formation from sea ice production, and by elevated biological productivity after spring restratification. Antarctic coastal polynyas are diverse in terms of their mixing and stratification patterns, as well as the associated biological productivity. Here, we combine satellite and in situ observations, idealized numerical models, and analytical scaling to investigate the three-dimensional polynya circulation and explore the physical factors that control winter destratification and springtime restratification in coastal polynyas. The high-resolution coupled model with ice shelf, sea ice, and ocean components qualitatively reproduces the observed coastal polynyas and sea ice fields, as evidenced by satellite measurements. In winter, strong offshore ocean currents driven by offshore katabatic winds carry some newly formed dense water away from the polynya, weakening the destratification rate in the polynya water column. In contrast, coastal easterly winds induce onshore Ekman transport, constrain dense water outflows, and intensify vertical mixing. Moreover, an ice tongue and coastline geometry can modify sea ice and ocean circulations, thus influencing the dense water dispersal pathways and destratification in polynyas. In spring, offshore-originating sea ice meltwater primarily drives polynya restratification in the top 100 m of the water column. Even though ice shelf basal meltwater can ascend to the polynya surface, much of it is mixed over the upper 100–200 m and does not contribute significantly to the near-surface restratification. Surface runoff from ice shelf surface melt could potentially contribute significantly to the near-surface restratification, but its magnitude remains poorly constrained.
This thesis provides a framework to study mixing and stratification dynamics in Antarctic coastal polynyas. It helps to explain their associated variabilities in dense water formation and biological productivity.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156947</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative Modeling for Guiding the Transition to Low-Carbon Logistics</title>
<link>https://hdl.handle.net/1721.1/156946</link>
<description>Quantitative Modeling for Guiding the Transition to Low-Carbon Logistics
Lehmann, Jonas
We propose quantitative models to guide the transition to low-carbon logistics by adopting new vehicle technologies. Freight and logistics systems are key enablers of global economic growth, competitiveness, and access to markets and services. However, freight mobility is a significant and growing source of negative externalities, including greenhouse gas emissions, thus facing increasing pressure to decarbonize. This thesis addresses the sector’s inherent decarbonization complexities, offering decision-support tools and insights across three chapters to decouple freight activity from carbon emissions. &#13;
First, we investigate the operational requirements for leveraging low-emission delivery vehicles in a last-mile distribution system. Specifically, we provide exact and heuristic solution approaches to route goods through a two-echelon multi-modal last-mile distribution system with satellite facilities. These systems can enhance flexibility and agility in serving densely populated, congested urban areas while reducing negative externalities by employing various vehicle types suitable for specific urban environments.&#13;
Second, we study the tactical and strategic implementation of vehicle fleet transitions towards low-carbon technologies under emissions reduction targets. More specifically, we provide a multi-period combinatorial optimization decision-support tool that offers cost-optimal fleet replacement and utilization decisions given a set of decarbonization targets. A case study utilizing fleet and network data from a large U.S. consumer goods company underscores the importance of strategic planning and execution in fleet transitions to leverage network-wide cost benefits and minimize potential excess costs as first movers.&#13;
Third, we investigate the roles of energy choices and cost uncertainties in fleet asset decarbonization. We propose a stochastic programming model to account for uncertainty in fixed and variable costs and the associated risk of stranded assets given the dynamic developments of low-emission technologies. We find that incorporating cost uncertainty captures a broader range of future technology pathways, and dynamically adjusting fleet transition strategies may offer advantages over static, deterministic approaches.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156946</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Staphylococci of the Skin: Consequences for Host Health</title>
<link>https://hdl.handle.net/1721.1/156945</link>
<description>Staphylococci of the Skin: Consequences for Host Health
Khadka, Veda D.
The skin is the body’s largest barrier organ, and as such hosts roughly one million bacteria per square centimeter over its 1.8 m² surface area. As a barrier organ, the skin not only provides a physical layer of defense against these microbes but an immunological one as well. Immune cells present in deeper layers of skin are in constant dialogue with the microbes present on the surface, and these interactions have far-reaching consequences for host health. Here, I interrogate the dynamics of the skin microbiome and the consequences of host-microbe interactions when the skin barrier is damaged. The skin, as an external organ, is subject to frequent stressors encountered in daily life, and can also be compromised due to genetic factors that weaken the barrier and predispose the host to inflammatory skin diseases. On healthy adults with an intact skin barrier, the skin microbiome is relatively diverse and stable. When the skin barrier is disrupted, either by daily stressors or genetic factors, the composition of the microbiome abruptly shifts to a less diverse state with an abundance of Staphylococci. Staphylococci have been shown to be important modulators of the host immune response and, during health, can improve host barrier repair from damage by wounding or parasitic infection. Much less is known, however, about immune interactions with skin-resident microbes like Staphylococci during barrier damage. In this work, I investigate the skin microbiome dynamics underlying a common inflammatory skin disease, atopic dermatitis (AD). During flares of AD, the pathogen Staphylococcus aureus rises to dominate the skin microbiome, and I show that the relative abundance of S. aureus decreases in patients who are treated with a combination of conventional therapies and dilute bleach baths. Next, I use an animal model to interrogate how the host responds to skin-resident microbes when the skin barrier is damaged. Although the protective effect of skin-resident microbes like S. epidermidis during health has made members of the skin microbiome attractive targets for development into probiotic therapies, I show that common skin microbes ubiquitously delay skin barrier repair. Together, these works suggest a mechanism by which the skin microbiome can exacerbate disease during barrier damage, such as during AD, and describe the underlying dynamics of the skin microbiome during treatment for AD.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156945</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>PCBleed: Fuzzing for CPU Bugs Through Use of Performance Counters</title>
<link>https://hdl.handle.net/1721.1/156944</link>
<description>PCBleed: Fuzzing for CPU Bugs Through Use of Performance Counters
Muradyan, Natalie
In recent years, the increasing complexity of hardware designs has given rise to a growing array of vulnerabilities and security threats, as exemplified by instances such as Spectre, Microarchitectural Data Sampling, and Zenbleed. The inherent permanence of hardware vulnerabilities poses a significant threat, making early identification crucial for preventing security compromises once a device is manufactured. However, identifying hardware vulnerabilities is challenging due to the large and complex designs of current CPUs, resulting in a substantial search space and numerous unknowns. This thesis proposes leveraging software fuzzing methods for hardware testing, focusing on the automated generation of instruction sequences that reveal hardware vulnerabilities. Unlike software fuzzing, hardware fuzzing faces challenges such as a lack of visibility into microarchitectural processor states and difficulty in directing the search for test case generation. To address these challenges, this research draws inspiration from software fuzzers that use insights into the internal workings of the software for effective test case generation. We propose PCBleed, a coverage-guided mutational hardware fuzzer that enhances CPU fuzzing by using hardware performance counters as insight into the CPU’s behavior to improve test case generation. Since performance counters measure architectural events relevant to CPU performance, they provide insights that we use to estimate coverage, marking instruction sequences as novel. This approach aims to maximize the functionality exercised during hardware fuzzing, ultimately identifying interesting, bug-triggering behavior. Our methodology is distinctive in its use of performance counters to enhance hardware fuzzing, and aligns with recent research findings that highlight the versatility of performance counters in debugging, dynamic software profiling, CPU power modeling, malware detection, and cache side-channel attack detection.
By incorporating performance counters into the hardware testing paradigm, this research seeks to contribute to the proactive fortification of hardware security through insightful analyses.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156944</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Image Recognition Difficulty in Artificial and Biological Visual Processing</title>
<link>https://hdl.handle.net/1721.1/156943</link>
<description>Characterizing Image Recognition Difficulty in Artificial and Biological Visual Processing
Cummings, Jesse E.
In recent years, computational models trained to do object recognition have become increasingly capable. Models have demonstrated significant improvements and have achieved saturated performance on many standard image classification benchmarks, sparking discussion of whether these models have achieved parity with human object recognition ability and whether we can consider this problem solved. However, these models continue to fail in real-world applications and in un-human-like ways, creating a disparity between the performance that benchmarks report and the performance that users experience. In this thesis, we investigate why standard datasets are misaligned with real-world performance by exploring image recognition difficulty as defined by human psychophysics. Using behavioral experiments with humans, we identify images that humans struggle to recognize and investigate the prevalence of these images in datasets and their effect on model performance. To shed light on how humans are able to recognize these images, we conduct a preliminary neuroimaging analysis to take the first steps toward identifying the neural signature of image difficulty.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156943</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven Classification of Pharmaceutical and Biotechnology Companies</title>
<link>https://hdl.handle.net/1721.1/156942</link>
<description>Data-Driven Classification of Pharmaceutical and Biotechnology Companies
Xu, Angelina
This study presents a novel approach for classifying biopharmaceutical companies from 2000 to 2023. We use fundamental financial data, 10-K filings, and company drug development data to develop this new classification scheme. Return correlations are used to measure the similarity of companies within a cluster, and our analysis demonstrates that this data-driven approach improves upon industry standards. Additionally, we evaluate the risk-return characteristics of the clusters developed from this classification scheme as a consideration for investment opportunities in these industries.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156942</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing an eCommerce Pricing Model Using Rank Centrality</title>
<link>https://hdl.handle.net/1721.1/156941</link>
<description>Developing an eCommerce Pricing Model Using Rank Centrality
Tong, Kevin C.
In recent years, eCommerce websites have become a popular alternative to traditional marketplaces, offering customers the convenience of ordering products from home and having them shipped. As a result, competition between sellers on eCommerce websites has intensified, making a pricing strategy necessary to perform well in this marketplace.&#13;
&#13;
This paper attempts to model eCommerce competition between different sellers using the principle of Rank Centrality, and uses neural networks to accurately predict the winning seller on eCommerce websites, such as Amazon, based on factors including pricing, seller rating, and shipping guarantees for each seller. Using this prediction, a pricing strategy is formed to maximize sales volume and profits on these sites. This strategy is then implemented and evaluated as part of a 6-month internship with Spero Goods.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156941</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Health Centered Drug Policy: An Analysis of Past and Developing Drug Policy</title>
<link>https://hdl.handle.net/1721.1/156940</link>
<description>Towards Health Centered Drug Policy: An Analysis of Past and Developing Drug Policy
Lewis, Benjamin B.
Drug criminalization has disproportionately impacted communities of color and has insufficiently addressed substance use disorder and its associated risk of death through overdosing. Decriminalization has the potential to restore justice to communities decimated by traditional U.S. drug policy and could shift public focus towards medical approaches to treating addiction; however, inertia in drug policy persists, influenced by America’s popular political beliefs about illicit substances. A long-standing narrative in the United States views marijuana as a “gateway drug” that introduces users to harder substances, which then have adverse effects on their health and livelihood. As a result, many argue that policies which decriminalize marijuana exacerbate the problem of drug addiction. Seemingly in line with this argument, overdose-related deaths, largely driven by increases in opioid consumption, have soared in recent years, and at the same time an increasing number of states have decriminalized marijuana. Little work, however, has examined the extent to which marijuana legalization has caused an increase in overdose deaths. Here, we address this question. To examine the causal effect of marijuana legalization on overdose deaths, we combine state-year level data on marijuana policy and overdose deaths with state-of-the-art techniques from the field of causal inference, namely Two-Way Fixed Effect Difference-in-Differences analysis with Synthetic Control. We include data from all states that enacted one of five marijuana legalization policies between 2010 and 2020. We estimate the causal effect of each policy separately for each state, and then use meta-analysis to calculate the overall effect of each policy intervention. We find that the passage of medical marijuana legalization laws, the opening of recreational dispensaries, and the implementation of medical marijuana patient ID programs had no significant effect on annual state overdose death rates.
The opening of medical marijuana dispensaries and the passage of recreational marijuana legalization laws also had no significant overall effect on overdose death rates, but the effect of these policies varied significantly across states, such that there were significant increases in some states and significant decreases in others. Overall, these findings contradict the popular claim that marijuana decriminalization leads to increased use of more dangerous drugs (and thus overdose deaths) in most cases, and, more generally, question the characterization of marijuana as a gateway drug.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156940</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Securing the Future: Critical Materials Policies for the US Energy Transition</title>
<link>https://hdl.handle.net/1721.1/156939</link>
<description>Securing the Future: Critical Materials Policies for the US Energy Transition
Concordel, Adrien
As the U.S. pushes forward industrial policies to support its energy transition, such as the Inflation Reduction Act (IRA) to develop domestic green-tech supply chains, it overlooks the crucial need for a sustainable and secure supply of critical materials. This oversight threatens the success of the nation’s sustainable transition due to limited resilience and dependencies on geopolitically, environmentally, and socially sensitive international sourcing, particularly from China. This thesis examines the key considerations for the U.S. to secure a sustainable supply of these materials, hypothesizing that a comprehensive policy framework integrating sustainable practices, domestic production incentives, and international cooperation can effectively reduce risks and externalities. Methods include empirical and case studies that highlight specific challenges such as permitting delays and dependency on foreign minerals, alongside economic models analyzing the impacts of these dependencies and market dynamics. Industry roundtables provide insights into prospective innovations and recent trends in the industry. Findings indicate significant market outlook uncertainty, critical dependence on imports, and significant limitations and inertia in new domestic resource development. The thesis proposes a policy framework aimed at addressing these deficiencies to support the U.S. in leading the global transition to sustainable technologies. Recommendations focus on enabling increased domestic production through better regulation and innovation, adopting sustainable practices, and diversifying supply chains to enhance resilience. This framework is crucial for policymakers, industry stakeholders, and academics involved in shaping a resilient U.S. energy strategy.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156939</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure Function Relation of Porous 2D Material via SGCMC Simulation and Statistical Models</title>
<link>https://hdl.handle.net/1721.1/156938</link>
<description>Structure Function Relation of Porous 2D Material via SGCMC Simulation and Statistical Models
Wanichkul, Athikom
To improve design for structural resilience and reduced environmental impact, we need to make the structure-function relation of concrete more accurate, accessible, and cost-effective. First, we formulate and implement the Semi-Grand Canonical Monte Carlo (SGCMC) simulation for fracture mechanics, a stochastic method capable of capturing both the initiation and the propagation of fractures in a medium. We then optimize the performance of our SGCMC simulation to reduce its time complexity from O(n²·³⁸) to O(n¹·²⁴) and its space complexity from O(n²) to O(n). The key step in this performance optimization is exploiting the sparsity of the stiffness matrix. We also deploy our code to run multiple simulations concurrently on a supercomputing infrastructure to achieve scalability. Then, to achieve an even more accessible and cost-effective structure-function relation, we apply statistical modeling to predict the strength of a two-dimensional porous material without running the simulation. We generate samples by randomly placing circular pores with radii drawn from a log-normal distribution until we reach the target porosity, and run our SGCMC simulations on the generated samples to create a data set to train our statistical models. We define several parameters, including the two-point correlation function, the multi-scale disorder index, the distribution of pore radii as recovered by the Circle Hough Transformation (CHT), and the area moments of the pores, to parameterize the porous geometry of the samples beyond the porosity, which is a well-known and very important parameter. We found our best model to be a Gradient Boosting Decision Trees (GBDT) regression model, whose out-of-sample R² is 0.904, as opposed to the baseline model of linear regression on the porosity, whose out-of-sample R² is 0.752.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156938</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mucins regulate virulence and colonization factors in Streptococcus pneumoniae</title>
<link>https://hdl.handle.net/1721.1/156937</link>
<description>Mucins regulate virulence and colonization factors in Streptococcus pneumoniae
Bath, Jade Rose
Mucus covers all wet epithelia in the human body, creating a protective barrier for the underlying cell layer and accommodating the trillions of microbes that make up the microbiota. Mucin glycoproteins, the main gel-forming component of mucus, have emerged as multifaceted regulators of microbial physiology and microbial communities. Defects in mucus production or changes in mucin glycosylation are associated with microbial dysbiosis, where the outgrowth of opportunistic pathogens threatens human health. Streptococcus pneumoniae is a ubiquitous opportunistic pathogen, able both to asymptomatically colonize the microbiota of healthy children and adults and to cause invasive diseases. The mechanisms by which the body tolerates the presence of S. pneumoniae as part of the microbiota remain largely unknown. In this thesis, I fill this gap by exploring how S. pneumoniae senses and responds to the mucin environment. First, I identify that mucins downregulate a key virulence factor of S. pneumoniae, the cytolytic toxin pneumolysin (PLY). I show that through the regulation of PLY, mucin protects host cells from toxin-mediated killing and modulates inflammatory signals. Second, I identify that mucins downregulate colonization factors in S. pneumoniae, modulating microbe-microbe interactions between nasopharyngeal bacteria. Together, these results uncover novel mechanisms for how mucin tames opportunistic pathogens and provide insight for the development of novel therapeutics to treat S. pneumoniae infection.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156937</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ecological forces affecting microbial eukaryotes in the coastal ocean</title>
<link>https://hdl.handle.net/1721.1/156936</link>
<description>Ecological forces affecting microbial eukaryotes in the coastal ocean
Gomez, Annika L.
Marine microbial eukaryotes (protists) play a central role in global biogeochemical cycles. Protist communities comprise carbon-fixing eukaryotic phytoplankton, which form the base of the marine food web; heterotrophic protists, which are predators of other microbes; and mixotrophs, which engage in a combination of these nutritional modes. The total abundance of a protistan population at any given time relies upon a combination of growth and death rates, which are impacted by nutrient availability (bottom-up control) and predation (top-down control). In this thesis, I investigate the effect of specific top-down and bottom-up forces at fine scales of time, location, and taxonomy, uncovering mechanisms by which nutrient limitation and viral infection affect marine protistan communities. In the first study, I leverage the 93-day Nahant Time Series to examine the dynamics and ecology of viruses infecting marine protists, the majority of which have only been identified by culture-independent means. This study focuses on Nucleocytoviricota, a diverse group of eukaryote-infecting dsDNA viruses with known potential to influence host metabolism and nutrient cycling. I developed a novel metagenomic sequence analysis pipeline that resolves cohesive populations of Nucleocytoviricota based on daily dynamics. Virus populations exhibit rapid and extensive fluctuations throughout the time series, mirroring the dynamics of their hosts. The diversity and structure of populations are indicative of viral ecology, with large networks of overlapping viruses and hosts suggestive of a broad host range for some viruses, while sharp population boundaries suggest viruses with narrow host ranges. In the second study, I investigate the role of bottom-up control, describing the effects of nutrient limitation on phytoplankton sinking velocity. We measure single-cell buoyant mass using a suspended microchannel resonator (SMR). Buoyant mass directly relates to sinking velocity through Stokes’ law.
We show that sinking velocity can be modulated by nutrient limitations via the accumulation of carbohydrates which increase cell density. These results demonstrate that in addition to cell growth, nutrient limitation can also affect vertical stratification within phytoplankton populations. The combined conclusions of these chapters demonstrate novel mechanisms by which top-down and bottom-up forces shape marine protistan communities.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156936</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Indexing Efficiency for Approximate Nearest Neighbor Search in High-dimensional Vector Databases</title>
<link>https://hdl.handle.net/1721.1/156935</link>
<description>Understanding Indexing Efficiency for Approximate Nearest Neighbor Search in High-dimensional Vector Databases
Qin, Yuting
Deep learning has transformed almost all types of data (e.g., images, videos, documents) into high-dimensional vectors, which in turn form vector databases as the data engines of various applications. As a result, queries on vector databases have become the cornerstone of many important online services, including search, eCommerce, and recommendation systems. In a vector database, the major operation is to search for the &#119896; closest vectors to a given query vector, known as &#119896;-Nearest-Neighbor (&#119896;-NN) search. Due to the massive data scale in practice, Approximate Nearest-Neighbor (ANN) search, which builds a search index offline to accelerate search online, is often used instead. One of the most promising ANN indexing approaches is the graph-based approach, which first constructs a proximity graph on the dataset, connecting pairs of vectors that are close to each other, then traverses the proximity graph for each query to find the closest vectors to the query vector. The search performance, in terms of the scope of traversal that leads to convergence, is highly dependent on the quality of the graph. There is a large body of prior work on improving graph quality with various heuristics. However, no analysis or modeling work has been done to quantitatively evaluate the heuristics and their impact on performance. Hence, it is unclear how to pick or combine the right heuristics to build a high-quality graph. This thesis aims to establish this connection and fill the gap. The key challenge in quantifying the heuristics is the complex tradeoff between search accuracy and search speed, which makes it almost impossible to establish an analytical model. To this end, we propose to leverage machine learning as the modeling tool. We first build a unified framework to characterize various graph-building heuristics by decoupling the graph construction and search phases.
We then extract graph attributes (e.g., diameter), and collect ground-truth performance data (e.g., search speed and accuracy) within our framework, across multiple datasets and graph configurations. Based on the collected data, we train a linear regression model to predict the search performance. We show experimental results on our model performance, and also discuss the implications on selecting heuristics that improve the quality of the indexing graphs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156935</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A neural clock underlying the temporal dynamics of an auditory memory</title>
<link>https://hdl.handle.net/1721.1/156934</link>
<description>A neural clock underlying the temporal dynamics of an auditory memory
Bahle, Andrew H.
Imitation is an essential hallmark of intelligent systems. Children imitate the speech, body language, and expressions of adults, eventually graduating to creative expressions of their own individual thoughts and ideas. In machine intelligence and A.I., large language models have recently demonstrated a striking ability to convincingly imitate written forms of human language from observation of massive corpora of text. A fundamental question is how these varied intelligent systems achieve such robust imitation. In animals, imitation is accomplished by complex neural circuits in the brain. To perform imitation, animals must first represent the sensory consequences of the action to be imitated and store this representation as a memory. Next, they must recall this sensory memory, evaluating their imitation attempts until a satisfactory match is achieved. In this thesis I study the neural control of vocal imitation in the songbird Taeniopygia guttata, focusing on the first stage of imitation, when animals must form a temporally structured sensory memory, or template, of the action to be imitated. In the first chapter, I present work attempting to localize the brain regions involved in the formation of the sensory memory used in imitation. We provide evidence that HVC, a pre-motor region that controls the timing of adult song, is involved in storing the timing of the tutor memory. This work shows how focal cooling can be used to study the formation of temporally structured memories even in the absence of overt behavior. In chapter 2, we ask what neural dynamics support the observed effect of cooling on the imitation. Using freely moving calcium imaging and head-fixed high-throughput electrophysiology, we show that tutoring evokes sparse sequential activity in HVC, reminiscent of its activity during adult production of the vocal imitation.
This activity was present as early as the very first day of tutoring, perhaps indicating that HVC connectivity is innately predisposed to produce sparse sequential representations of song. In the final chapter, we explore changes in the representation of the tutor song before and after tutoring. We observe the emergence of tutor-selective neural responses in HVC after tutoring and quantify this selectivity at the population level and in different cell types. We further show that this tutor song selectivity is stronger in HVC than in any of its auditory inputs, suggesting that tutor song selectivity results from the storage of a tutor memory in HVC itself. Together, this work shows how HVC neural dynamics can act as a clock for the storage and recall of an auditory memory and gives insight into how memories containing temporal structure might be stored more broadly.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156934</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The cognitive and neural basis of complex decision-making in the primate brain</title>
<link>https://hdl.handle.net/1721.1/156933</link>
<description>The cognitive and neural basis of complex decision-making in the primate brain
Ramadan, Mahdi F.
A longstanding question at the intersection of comparative psychology, cognitive ethology, and cognitive neuroscience is what cognitive strategies primates use to tackle complex multi-step decisions, and what the neural underpinnings of those strategies are. Traditionally, cognitive experiments come in two broad flavors. In one flavor, sophisticated tasks thought to invoke high-level cognitive strategies are used, but their complexity precludes rigorous quantitative modeling, leading to mixed interpretations. In another flavor, very simple tasks are used, which have afforded detailed characterization of behavior and the underlying neurobiology, but are limited in eliciting high-level cognitive strategies. In this thesis, I capitalize on both traditions. In the first chapter, I present a novel multi-step decision-making task that was sufficiently complex to allow for multiple strategies, ranging from basic heuristics to more optimal strategies, but simple enough to accommodate quantitative modeling. I then use a series of human psychophysical experiments to quantitatively show that humans rely on a heuristic hierarchical strategy to solve the task due to attentional constraints, and when uncertain, flexibly revise their decisions in a computationally rational manner. In chapter two, I train two monkeys on the task and find that monkeys also adopt a hierarchical and revision strategy to solve the task, like humans. Monkeys were also able to readily generalize their strategy to novel scenarios and made eye movements that were indicative of simple forms of counterfactual reasoning. However, it was difficult from behavior alone to test whether monkeys were actually using multiple different strategies to solve the task. To investigate this possibility and the underlying neurobiology of hierarchical and revision strategies, in chapter three we conducted high-density neural recordings from monkeys while they performed the task.
Neural recordings revealed that monkeys were indeed not using one strategy to solve the task, but rather showed the initialization and dynamic progression of two distinct cognitive strategies that monkeys adaptively selected for different scenarios. We find that neural population initial conditions and response dynamics were flexibly modulated to implement these distinct decision-making strategies. Finally, we use the neurally inferred strategies to build composite psychophysical models that better capture the monkeys’ behavior. These results point to the importance of detailed neural recordings in combination with quantitative behavioral modeling for understanding primate cognition.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156933</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overcoming the Expressivity-Efficiency Tradeoff in Program Induction</title>
<link>https://hdl.handle.net/1721.1/156932</link>
<description>Overcoming the Expressivity-Efficiency Tradeoff in Program Induction
Acquaviva, Samuel
People are incredibly flexible and efficient inductive reasoners. On the other hand, current approaches in program synthesis show strong domain-specific performance, but are both less sample-efficient and less flexible. Large language models improve upon this sample-efficiency and domain-generality, but lack robustness and still fall far short of people and traditional approaches on difficult induction tasks. In this thesis, we propose two hypotheses for how people seemingly overcome this trade-off between flexibility and efficiency. In the first, we propose that people may operate over an incredibly vast language which is made tractable via a strong, bottom-up proposal model. In the second, we propose that, alternatively, people may relax the necessity of such a strong proposal model by learning task-specific reasoning languages through experience. We build models operationalizing both hypotheses and show that they can improve the generality and efficiency of previous models.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156932</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring connections between seagrass ecosystem services and meadow hydrodynamics</title>
<link>https://hdl.handle.net/1721.1/156931</link>
<description>Exploring connections between seagrass ecosystem services and meadow hydrodynamics
Schaefer, Rachel
Meadows of aquatic vegetation, such as seagrass, modify the flow of water and transport of sediment in the environment. The hydrodynamic drag generated by a seagrass meadow contributes to the numerous ecosystem services it provides, which include quiescent habitat for other species, wave damping, water quality enhancement, and carbon sequestration. This thesis reports on a series of studies using physical experiments, simulations, and field measurements to relate the interactions between seagrass, waves, currents, and sediment to two ecosystem services, wave dissipation and carbon sequestration.&#13;
&#13;
First, laboratory studies and simulations were used to explore how plants interact with waves and currents with the goal of predicting wave dissipation and turbulence generation. The flexibility of a plant is critical in defining its interactions with the environment. Seagrass plants deflect under currents, which streamlines the plants and reduces the parts of the plants directly experiencing the flow, and sway under waves, which reduces the relative motion between the plants and the flow. These responses, known as reconfiguration, reduce the drag seagrass plants experience compared to a rigid plant of the same length. Laboratory flume and numerical experiments showed that the relative magnitudes of current and wave velocities determine the influence of reconfiguration on drag, and therefore on seagrass-induced wave attenuation and turbulence. For more flexible leaves, defined as having a ratio of drag force to restoring force due to stiffness greater than 100, drag reduction due to current-induced deflection competes against drag augmentation due to lower relative motion, such that enhancing current speeds reduces wave energy dissipation only when the current velocity is less than one-third of the maximum wave velocity. For stiffer leaves, drag augmentation dominates drag reduction, so that adding a current enhances wave energy dissipation. Meanwhile, the measured effects of reconfiguration on plant-generated turbulence were used to propose a hybrid analytical model for predicting the turbulence to account for the relative contributions of waves and currents.&#13;
&#13;
Second, field experiments were performed in three Massachusetts, USA seagrass meadows to relate spatial patterns in hydrodynamics with spatial patterns in sediment organic carbon. Lower velocities were expected to reduce sediment mobility and thus enhance the deposition and retention of sediment carbon. At a wave-dominated continuous meadow, results showed decreasing sediment carbon accretion rates with increasing wave velocities, which could be predicted by accounting for seagrass-induced wave damping and wave shoaling. However, at a current-dominated lagoonal continuous meadow, sediment carbon increased with increasing tidal velocities. The spatial reduction in sediment carbon at the latter site was attributed to spatial diminishment of sediment supply with increasing distance into the meadow, away from the lagoon inlet. Lastly, in a patchy current-dominated meadow, the spatial variability in sediment carbon stocks did not correlate with the spatial distribution of patches. One vegetated patch showed substantially higher sediment carbon than the rest of the meadow, which was attributed to the recent persistence of the specific patch. Finally, preliminary results are presented for a field study comparing different methods of estimating net ecosystem carbon exchange in a seagrass meadow.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156931</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shafting and its fittings</title>
<link>https://hdl.handle.net/1721.1/156872</link>
<description>Shafting and its fittings
Lewis, Theo. J.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1876
</description>
<pubDate>Sat, 01 Jan 1876 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156872</guid>
<dc:date>1876-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Merrimac ore, burleigh tunnel ore</title>
<link>https://hdl.handle.net/1721.1/156871</link>
<description>Merrimac ore, burleigh tunnel ore
Shockley, W. H.,
            1855-1925.; Oxnard, Benjamin A.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156871</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for a water works in a public park</title>
<link>https://hdl.handle.net/1721.1/156870</link>
<description>Design for a water works in a public park
Boyden, Amos J.,
            1853-
Thesis: B.S., Massachusetts Institute of Technology, Department of Architecture, 1875
</description>
<pubDate>Fri, 01 Jan 1875 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156870</guid>
<dc:date>1875-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report of a three weeks' experience in the Quartz Mills at Grass Valley, Cal.</title>
<link>https://hdl.handle.net/1721.1/156869</link>
<description>Report of a three weeks' experience in the Quartz Mills at Grass Valley, Cal.
Locke, Brad. H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1872
</description>
<pubDate>Mon, 01 Jan 1872 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156869</guid>
<dc:date>1872-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of the stray power in Edison dynamos</title>
<link>https://hdl.handle.net/1721.1/156868</link>
<description>Investigation of the stray power in Edison dynamos
Garrison, Charles.; Greer, Medorem W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1891
</description>
<pubDate>Thu, 01 Jan 1891 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156868</guid>
<dc:date>1891-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a single track, iron, Warren Girder Rail Road Bridge</title>
<link>https://hdl.handle.net/1721.1/156867</link>
<description>Design of a single track, iron, Warren Girder Rail Road Bridge
Howard, C. P.
            (Charles P.)
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156867</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Murphy-Whipple Truss</title>
<link>https://hdl.handle.net/1721.1/156866</link>
<description>A Murphy-Whipple Truss
Sweetser, Arthur W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156866</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Description of a design for a single track railroad bridge of 190 feet span</title>
<link>https://hdl.handle.net/1721.1/156865</link>
<description>Description of a design for a single track railroad bridge of 190 feet span
Shaw, Edward S.
            (Edward Stone)
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156865</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designs and estimates of a Murphy-Whipple Truss</title>
<link>https://hdl.handle.net/1721.1/156864</link>
<description>Designs and estimates of a Murphy-Whipple Truss
Perkins, H. B.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156864</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Linville Bridge</title>
<link>https://hdl.handle.net/1721.1/156863</link>
<description>The Linville Bridge
Emerson, J. S.
            (Joseph S.)
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156863</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linville Truss</title>
<link>https://hdl.handle.net/1721.1/156862</link>
<description>Linville Truss
Holbrook, E.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156862</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A double Warren girder</title>
<link>https://hdl.handle.net/1721.1/156861</link>
<description>A double Warren girder
Doane, G. E.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156861</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for a post truss bridge</title>
<link>https://hdl.handle.net/1721.1/156860</link>
<description>Design for a post truss bridge
Blunt, William T.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156860</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The post iron truss bridge</title>
<link>https://hdl.handle.net/1721.1/156859</link>
<description>The post iron truss bridge
Barrows, Herbert.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1874
</description>
<pubDate>Thu, 01 Jan 1874 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156859</guid>
<dc:date>1874-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An initial design procedure for the motion analysis of flexible marine risers</title>
<link>https://hdl.handle.net/1721.1/156858</link>
<description>An initial design procedure for the motion analysis of flexible marine risers
Jones, Hobart Todd.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1987; Bibliography: leaves 262-264.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156858</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Calculations for the platform of a suspension bridge with special reference to the cross girders</title>
<link>https://hdl.handle.net/1721.1/156857</link>
<description>Calculations for the platform of a suspension bridge with special reference to the cross girders
Howland, A. H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1871
</description>
<pubDate>Sun, 01 Jan 1871 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156857</guid>
<dc:date>1871-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for a bowstring bridge</title>
<link>https://hdl.handle.net/1721.1/156856</link>
<description>Design for a bowstring bridge
Dodge, William Baldwin.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1872
</description>
<pubDate>Mon, 01 Jan 1872 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156856</guid>
<dc:date>1872-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A report upon a Howe truss</title>
<link>https://hdl.handle.net/1721.1/156855</link>
<description>A report upon a Howe truss
Shepard, W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1872
</description>
<pubDate>Mon, 01 Jan 1872 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156855</guid>
<dc:date>1872-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design for wrought iron railway bridge</title>
<link>https://hdl.handle.net/1721.1/156854</link>
<description>Design for wrought iron railway bridge
Allen, C. Frank
            (Calvin Frank),
            1851-1948.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1872
</description>
<pubDate>Mon, 01 Jan 1872 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156854</guid>
<dc:date>1872-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the composition of the acid oxalates of potassium, ammonium and sodium</title>
<link>https://hdl.handle.net/1721.1/156853</link>
<description>On the composition of the acid oxalates of potassium, ammonium and sodium
Nichols, Wm. Ripley
            (William Ripley),
            1847-1886.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemistry, 1869
</description>
<pubDate>Fri, 01 Jan 1869 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156853</guid>
<dc:date>1869-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic stability testing with a wind tunnel magnetic model suspension system</title>
<link>https://hdl.handle.net/1721.1/156852</link>
<description>Dynamic stability testing with a wind tunnel magnetic model suspension system
Tilton, Edward Lee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1963; Includes bibliographical references (leaf 28).
</description>
<pubDate>Tue, 01 Jan 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156852</guid>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A user-friendly interface for a poisson-solver</title>
<link>https://hdl.handle.net/1721.1/156851</link>
<description>A user-friendly interface for a poisson-solver
Johnson, Ted C.
            (Ted Christian)
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1987; Bibliography: leaf 98.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156851</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cooperative research--improving university-industry joint efforts</title>
<link>https://hdl.handle.net/1721.1/156850</link>
<description>Cooperative research--improving university-industry joint efforts
Jones, Ruth J.
            (Ruth Jiling)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1987; Bibliography: leaves 69-70.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156850</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thermal evaluation of selected ablative materials in transient, low-heat flux environments</title>
<link>https://hdl.handle.net/1721.1/156849</link>
<description>Thermal evaluation of selected ablative materials in transient, low-heat flux environments
Marques, Joseph Peter.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1983; "CSDL-T-809."; Includes bibliographical references.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156849</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transformation toughening and the martensitic transformation in ZrO2</title>
<link>https://hdl.handle.net/1721.1/156848</link>
<description>Transformation toughening and the martensitic transformation in ZrO2
Coyle, Thomas William.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1985; Vita.; Bibliography: leaves 235-251.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156848</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic behavior in economic rivalry</title>
<link>https://hdl.handle.net/1721.1/156847</link>
<description>Strategic behavior in economic rivalry
Fudenberg, Drew.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1981; Includes bibliographies.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156847</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An examination of surgical scheduling policies.</title>
<link>https://hdl.handle.net/1721.1/156846</link>
<description>An examination of surgical scheduling policies.
Hill, Claire Louise.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1978; Includes bibliographical references.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156846</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information and distortion in filtering theory.</title>
<link>https://hdl.handle.net/1721.1/156845</link>
<description>Information and distortion in filtering theory.
Galdos, Jorge Ignacio.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1975; Vita.; Includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156845</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gradability in Count Nouns: Categorizing and Counting Part and Whole Objects in Children and Adults</title>
<link>https://hdl.handle.net/1721.1/156840</link>
<description>Gradability in Count Nouns: Categorizing and Counting Part and Whole Objects in Children and Adults
Sanchez, Karissa
Some of the first words that children can comprehend and produce are nouns like ball and fork. Despite this apparent early command, children deviate from adult-like behavior when categorizing and quantifying objects falling under noun descriptions. Even beyond four years of age, when they are asked to count the Xs given a set of objects that includes whole objects that fall under the noun and detached parts of objects that do, they have a tendency to count the individual partial objects as if they were wholes. Prior accounts attribute this difference either to a child's nascent numerical and quantificational abilities or to their semantic and pragmatic understanding of nominal label usage. These accounts are informed by experiments which varyingly probe categorization, counting, and quantification. However, no account can fully explain the data across all experiments, making it difficult to adjudicate between them. In this thesis, I propose a new approach to analyzing the deviation in child and adult behavior by considering how both nominal and quantificational abilities could influence it. We design a novel paradigm that examines the same children’s categorization of partial objects under noun labels and their numerical judgements about the items they had just categorized. This paradigm allows us to pinpoint where the cause of the deviation in child-like and adult-like behavior lies. Is it due to a difference in understanding nominal usage, their ability to quantify items, or both? Ultimately, we find evidence that both nominal usage and quantificational abilities could be contributing to the deviation in behavior. We also suggest that, in addition to an overly flexible standard of application for count nouns, children's lack of granularity in numerical measurements could be causing them to count partial objects as wholes.
For instance, children might be less adept than adults at accessing measurements between 0 and 1 such as half an X, causing them to count partial objects under a noun label as one such object.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156840</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic Design and Optimization for Quantum Computation with a Qubit-Oscillator System</title>
<link>https://hdl.handle.net/1721.1/156839</link>
<description>Algorithmic Design and Optimization for Quantum Computation with a Qubit-Oscillator System
Mintzer, Gabriel L.
Quantum computation has long been dominated by a digital approach using the qubit, which exists in a two-dimensional vector space, as its basic unit.  More recently, there has been increasing interest in an analog approach, which uses as its basic unit a qudit in an infinite-dimensional vector space.  Alongside these two approaches is a third less-studied approach, that of combining digital and analog quantum computation.  This approach is perhaps best exemplified by, and most researched via, the system of a qubit coupled to a quantum harmonic oscillator, which has been realized with many of the leading platforms for quantum computation.  In this thesis, we ask how machine learning and other high-level computational techniques can be employed in the design of applications of a qubit-oscillator system to implementing fundamental components of quantum technology.  In order to begin to answer this question and lay the groundwork for future investigation, both with this system and with others, we demonstrate the application of such high-level computational techniques toward addressing the problems of quantum compilation, quantum sensing, and quantum error-correction with the qubit-oscillator system.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156839</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elucidating Targetable Genetic Vulnerabilities in Relapsed/Refractory Diffuse Large B-cell Lymphoma</title>
<link>https://hdl.handle.net/1721.1/156838</link>
<description>Elucidating Targetable Genetic Vulnerabilities in Relapsed/Refractory Diffuse Large B-cell Lymphoma
Li, Audrey
Diffuse large B-cell lymphoma (DLBCL), the most prevalent form of non-Hodgkin lymphoma, is marked by significant heterogeneity in its morphology, genetic irregularities, and clinical behavior. Current prognostic tools, including the International Prognostic Index and cell-of-origin transcriptional classifications such as germinal center B-cell-like and activated B-cell-like, do not adequately reflect DLBCL's complex nature. Front-line standard of care treatment predominantly consists of a regimen with cyclophosphamide, doxorubicin, prednisone, rituximab, and vincristine (R-CHOP); however, the relapse rate remains high, underscoring the need for improved diagnostic and therapeutic methods. In this comprehensive analysis, we investigated the genetic substructure of DLBCL in both newly diagnosed and relapsed/refractory cases, focusing on genetic abnormalities pertinent to relapsed settings and the immune microenvironment’s influence on therapy response. Our findings revealed significant enrichment of specific genetic clusters, notably clusters 2 and 5, which are associated with an inferior prognosis and high relapse rates following R-CHOP therapy. These clusters were characterized by distinct genetic alterations, including prevalent mutations in TP53, BCL2, and MYD88. The results of this study suggest that integrating detailed genetic profiling into the clinical management of DLBCL could significantly refine therapeutic approaches, tailoring them to the unique genetic backdrop of each patient’s disease. This approach promises to enhance the precision of prognostic assessments and the efficacy of subsequent therapeutic interventions, paving the way for personalized medicine in the treatment of DLBCL.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156838</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Causal Inference and Attribute Prediction Through Visual Information</title>
<link>https://hdl.handle.net/1721.1/156837</link>
<description>Improving Causal Inference and Attribute Prediction Through Visual Information
Chau, Eileen
Causal inference is an active area of research in computer science and statistics because it supports causal conclusions that traditional statistics cannot. A naive way to conclude the cause of an outcome is by using correlations, but this is not always accurate because there may be other variables that indirectly affect an outcome. Causal inference aims to find the root cause by accounting for those variables, called confounders. Frequently, confounding variables are attributes in existing data, but sometimes they can be missing from the existing data. In those cases, data analysts have to look for confounders from outside sources such as tables, knowledge graphs, and text. Our focus is to look for confounding variables in visual data such as videos and images. Discovering confounders from visual data is a challenge because videos and images are unstructured, unlike tables and graphs. Thus, it is difficult to identify features and also extract them from visual data. Additionally, the identified and extracted features must be relevant to the causal question being studied. With the recent advancement in visual language models (VLMs) such as GPT-4V(ision), VLMs can provide a versatile solution to the confounder discovery and feature extraction problem when using visual data. This thesis proposal investigates confounder discovery, feature extraction, and causal inference from visual data by utilizing the power of VLMs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156837</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building a Distributed Transaction Processing System Using DARQ</title>
<link>https://hdl.handle.net/1721.1/156836</link>
<description>Building a Distributed Transaction Processing System Using DARQ
Zhu, Ophelia Min
Building distributed transaction processing systems in the context of cloud microservices poses challenges related to fault tolerance, resilience, and composability. Composable Resilient Steps (CReSt) and its implementation, Deduplicated Asynchronously Recoverable Queues (DARQ), provide an abstraction to address these challenges by separating application logic from resilience mechanisms. This thesis explores the performance and usability of DARQ through the development of a distributed transaction processing system. DARQ is evaluated by its performance on the YCSB and TPCC benchmarks and by the ease of programming with it. The abstraction of CReSt and DARQ, while requiring additional setup, simplifies the programming of fault-tolerant applications and provides performance optimizations out of the box compared to a standard baseline implementation, enabling a 6.89x speedup for TPCC. The abstraction reduced the amount of logic needed in components that required persistence, namely the write-ahead log and two-phase commit protocol. As complex systems build on one another, DARQ can be a useful abstraction for developers to simplify their application logic whilst providing fault tolerance and performance optimizations.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156836</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of Optimized Architected Reef Design in Random Oscillatory Motion for Maximized Wave Energy Dissipation and Coastal Preservation</title>
<link>https://hdl.handle.net/1721.1/156835</link>
<description>Evaluation of Optimized Architected Reef Design in Random Oscillatory Motion for Maximized Wave Energy Dissipation and Coastal Preservation
Sinha, Anjali
The mitigation of exacerbated coastal erosion and reef degradation warrants thorough examination and enhancement of existing coastal defense strategies. Severe threats to ecosystems, communities, and infrastructure from climate change, including rising sea levels and intensified weather events, necessitate the development of new technologies for protection and damage prevention. The focus of this research is to inform optimization efforts for the design of an architected reef structure aimed at maximizing wave energy dissipation when placed under various real-world environmental conditions. By testing reef structures in sea storm conditions with random oscillatory motion, this study aims to assess the effectiveness of the architected reefs in mitigating the adverse effects of wave energy. Validating the performance of reef structures in random wave motion, as compared to regular, sinusoidal motion, will improve testing efficiency, advancing the development of sustainable and resilient solutions for future coastal preservation efforts.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156835</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A System to Exploit Symmetry in Common Tensor Kernels</title>
<link>https://hdl.handle.net/1721.1/156832</link>
<description>A System to Exploit Symmetry in Common Tensor Kernels
Patel, Radha
Symmetric tensors arise naturally in many domains including linear algebra, statistics, physics, chemistry, and graph theory. Symmetry arises through both mathematical properties and scientific phenomena. Taking advantage of symmetry in matrices saves a factor of two, but taking advantage of symmetry in a tensor of order n can save a factor of n! in memory accesses and operations. However, implementing this symmetry by hand significantly increases the complexity; for instance, leveraging symmetry in 2D BLAS nearly doubles the implementation burden, and this burden escalates further in the case of higher-dimensional tensors. Existing compilers for these kernels either do not take advantage of symmetry or do not take advantage of it to the extent possible. My thesis will identify and categorize methods to exploit symmetry in common and uncommon tensor kernels. We will describe a methodology to systematically generate and optimize symmetric code and will present a compiler in Julia that automates this process. Our symmetric implementation demonstrates significant speedups ranging from 1.36x for SSYMV to 7.95x for a 4-dimensional MTTKRP over the naive implementation of these kernels.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156832</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing SCRAM: Privacy-Centric Approaches in Cyber Risk Measurement</title>
<link>https://hdl.handle.net/1721.1/156831</link>
<description>Advancing SCRAM: Privacy-Centric Approaches in Cyber Risk Measurement
Magrefty, David S.
The Secure Cyber Risk Aggregation and Measurement (SCRAM) framework allows multiple parties to compute aggregate cyber-risk measurements without the need to publicly disclose any information about their identity and their personal data. The framework, through the use of Multi-Party Computation (MPC) and Homomorphic Encryption (HE), guarantees each party that their participation in the computation is confidential and that the aggregated results will not be decrypted without their authorization [1]. However, the system does not limit what the output of the aggregated computations reveals about their identity, their security posture, and their losses.&#13;
&#13;
In this work, we tackle the challenging problem of preserving privacy in small datasets while maximizing utility, a critical issue in the context of the SCRAM framework. We first construct a linear programming problem that demonstrates how the aggregate outputs of SCRAM do not provide adequate privacy, revealing sensitive information about individual parties. Then, we establish new privacy guarantees for the framework based on the concepts of Predicate Singling Out (PSO) and Differential Privacy (DP). These guarantees aim to protect the identity and data of the participating parties while still allowing for meaningful aggregate measurements.&#13;
&#13;
We then demonstrate the inadequacy of existing privacy solutions for small datasets and propose two novel techniques specifically designed for small datasets: integer-binary randomized response and clustering-based output perturbation. The integer-binary randomized response transforms integer inputs into binary questions, enabling the application of randomized response techniques while minimizing the impact on data utility. The clustering-based approach aggregates similar values into clusters and reports summary statistics, effectively obfuscating individual data points while preserving the overall distribution and relative magnitudes. These techniques offer a balance between privacy and utility, demonstrating the feasibility of privacy-preserving computation on small datasets.&#13;
&#13;
Our work highlights the limitations of existing privacy solutions for small datasets and the necessity of developing specialized techniques to address this challenge. The proposed methods not only enhance the privacy guarantees of the SCRAM framework but also contribute to the broader field of privacy-preserving computation, providing a foundation for future research and applications involving sensitive data aggregation and analysis in small dataset scenarios.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156831</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Emotion Vectorization Algorithm (EVA): Automated Music Generation from Imaging and Emotion Inputs</title>
<link>https://hdl.handle.net/1721.1/156829</link>
<description>The Emotion Vectorization Algorithm (EVA): Automated Music Generation from Imaging and Emotion Inputs
Liu, Dylan
Generative AI tools for the creative arts have become increasingly popular over the past few years. Several well-known models, such as ChatGPT and DALL-E, can even produce writing and artwork comparable to those created by human professionals. Thus, it's no surprise that many technology firms, such as OpenAI and Google, have trained models that can create music as well. These state-of-the-art models usually take in an artist or genre, and they output a song corresponding to the received inputs. However, none of these models are designed to generate music according to an emotional input, nor are they able to generate their own styles of music (i.e. they are all trained on well-known works).&#13;
&#13;
Because music is designed to target and evoke specific feelings within the listener, we aim to produce a tool that accounts for this emotional aspect. To this end, we create EVA, a new type of generative music model. EVA is the first model that takes in a quantitative representation of an emotion as input and returns an instrumentalized musical performance that evokes that emotion as output. Furthermore, without relying on past works of well-known composers for training data, EVA produces a unique style of music that is dissimilar to any particular artist.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156829</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance Engineering of Modular Symbols</title>
<link>https://hdl.handle.net/1721.1/156828</link>
<description>Performance Engineering of Modular Symbols
Boonsiriseth, Krit
We present a new program, MFSplit, which computes information about newform subspaces for modular forms of weight 2 and trivial character. Modular forms are certain functions that appear in many different subfields of mathematics, including number theory and complex analysis; newform subspaces are spaces spanned by a special type of modular form and are, in some sense, building blocks of spaces of modular forms. Our program MFSplit is based on modular symbols, a formalism commonly used to compute modular forms. Existing computer algebra systems such as Sage and Magma include implementations of modular symbols. Our implementation applies the principles of performance engineering to this computational number theory problem, and MFSplit is at least 3 times faster than existing implementations. Consequently, we were able to compute information about newform subspaces for level N ≤ 50000, extending previous efforts that computed this information up to N ≤ 16000. Based on this computation, we analyze the performance characteristics of our program and generate more data related to certain conjectures in mathematics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156828</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Vision Techniques for Drill Bit Identification and Mechanical Wear Detection</title>
<link>https://hdl.handle.net/1721.1/156827</link>
<description>Computer Vision Techniques for Drill Bit Identification and Mechanical Wear Detection
Darby, Brady J.
Developments in computer vision techniques over the past decade have rapidly accumulated and enabled the application of vision systems to use cases that were once out of reach. In conjunction with standard image processing techniques, deep learning models for vision tasks have received increasing attention, and both see considerable utility in space exploration. Specifically, real-time obstacle detection and motion planning require advanced vision logic. However, retroactive data analysis is an area with less emphasis but promising applications for computer vision. This thesis project explores how both image processing and deep learning-based computer vision methods can be leveraged to analyze drill bits on board the Mars 2020 Perseverance Rover, a Jet Propulsion Laboratory (JPL) mission. The effectiveness of thresholding and segmentation on two critical tasks, drill bit identification and mechanical wear detection, is demonstrated. Then, transfer learning of convolutional neural networks (CNNs) is applied to the same tasks, allowing comparison of results. This thesis also explores a means of presenting processed image outputs to non-technical operators in order to assist manual analysis of drill bit wear state.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156827</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expectation-based comprehension of linguistic input: facilitation from visual context</title>
<link>https://hdl.handle.net/1721.1/156826</link>
<description>Expectation-based comprehension of linguistic input: facilitation from visual context
Pushpita, Subha Nawer
Context fundamentally shapes real-time human language processing, creating linguistic expectations that drive efficient processing and accurate disambiguation (Kuperberg and Jaeger, 2016). In naturalistic language understanding, the visual scene often provides crucial context (Ferreira et al., 2013; Huettig et al., 2011). We know that visual context guides spoken word recognition (Allopenna et al., 1998), syntactic disambiguation (Tanenhaus et al., 1995), and prediction (Altmann and Kamide, 1999), but much about how visual context shapes real-time language comprehension remains unknown. In this project, we investigate how visual information penetrates the language processing system and real-time language understanding. Here we show that relevant visual context significantly facilitates reading comprehension, with the amount of facilitation modulated by a word’s degree of grounding in that visual context (an image, in our case). Our results also demonstrate that the facilitation is largely mediated by multimodal surprisal (the relative entropy induced by the word between the distributions over interpretations of the previous words in the sentence and the image). We also found that the errors people are prone to make in reading comprehension tasks can be largely predicted by the amount of multimodal surprisal. The results also highlight the strong correlation between a word’s degree of grounding and the reduction in surprisal when an image is present. Our work offers new possibilities for how multimodal large language models may be used in psycholinguistic research to investigate how visual context affects language processing.
This work will also pioneer questions about how information processed in different modalities, such as audio, video, or structured visuals like graphs and diagrams, shapes upcoming linguistic comprehension or even language generation, providing fundamental theoretical insights into the way we use language to navigate a complex world.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156826</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Further Hardness Results for Stephen’s Sausage Roll</title>
<link>https://hdl.handle.net/1721.1/156825</link>
<description>Further Hardness Results for Stephen’s Sausage Roll
Liu, Jason
Stephen’s Sausage Roll is a relatively unstudied puzzle game with a fascinating set of mechanics for computational hardness problems. The only past results are from a class project in MIT’s 6.5440 class of Fall 2023, which dealt with only two specific subsets of the mechanics restricted to two-dimensional forms of the game [1]. This project presents a more complete characterization of problems based on Stephen’s Sausage Roll, and provides solutions for a significant portion of them. In particular, both variants of Stephen’s Sausage Roll considered in prior work can be solved by one of these results.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156825</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Augmenting Inputs using a Novel Figure-to-Text Pipeline to Assist Visual Language Models in Answering Scientific Domain Queries</title>
<link>https://hdl.handle.net/1721.1/156824</link>
<description>Augmenting Inputs using a Novel Figure-to-Text Pipeline to Assist Visual Language Models in Answering Scientific Domain Queries
Gupta, Sejal
Recent advancements in visual language models (VLMs) have transformed the way we interpret and interact with digital imagery, bridging the gap between visual and textual data. However, these models, like Bard, GPT4-v, and LLava, often struggle with specialized fields, particularly when processing scientific imagery such as plots and graphs in scientific literature.&#13;
&#13;
In this thesis, we discuss the development of a pioneering reconstruction pipeline to extract metadata, regenerate plot data, and filter out extraneous noise such as legends from plot images. Ultimately, the collected information is presented to the VLM in a structured, textual manner to assist in answering domain-specific queries. The efficacy of this pipeline is evaluated using a novel dataset of scientific plots extracted from battery domain literature, alongside existing benchmark datasets including PlotQA and ChartQA. Results on component accuracy, task accuracy, and question-answering with augmented inputs to a VLM show promise for the future capabilities of this work.&#13;
&#13;
By assisting VLMs with scientific imagery, we aim not only to enhance the capabilities of VLMs in specialized scientific areas but also to transform the performance of VLMs in domain-specific areas as a whole. This thesis provides a detailed overview of the work, encompassing a literature review, methodology, results, and recommendations for future work.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156824</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adapting Transformer Encoder Architecture for Continuous Weather Datasets with Applications in Agriculture, Epidemiology and Climate Science</title>
<link>https://hdl.handle.net/1721.1/156822</link>
<description>Adapting Transformer Encoder Architecture for Continuous Weather Datasets with Applications in Agriculture, Epidemiology and Climate Science
Hasan, Adib
This work introduces WeatherFormer, a transformer encoder-based model designed to robustly represent weather data from minimal observations. It addresses the challenge of modeling complex weather dynamics from small datasets, which is a bottleneck for many prediction tasks in agriculture, epidemiology, and climate science. Leveraging a novel pretraining dataset composed of 39 years of satellite measurements across the Americas, WeatherFormer achieves state-of-the-art performance in crop yield prediction and influenza forecasting. Technical innovations include a unique spatiotemporal encoding that captures geographical, annual, and seasonal variations, input scalers that adapt the transformer architecture to continuous weather data, and a pretraining strategy to learn representations robust to missing weather features. This thesis demonstrates, for the first time, the effectiveness of pretraining large transformer encoder models for weather-dependent applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156822</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Benthic: Designing Relational Traversal Structures to Enhance Diagram Accessibility</title>
<link>https://hdl.handle.net/1721.1/156821</link>
<description>Benthic: Designing Relational Traversal Structures to Enhance Diagram Accessibility
Mei, Catherine
Diagrams are data structures for problem-solving and communication because they allow users to formalize and analyze complex concepts through spatial relations. However, their visual nature presents significant accessibility challenges for blind and low-vision users who rely on screen readers. Existing methods for making diagrams accessible often fall short, providing only superficial overviews and lacking detailed, navigable structures. This paper introduces Benthic, a system for generating intermediate representations and depicting relational information in diagrams. Benthic provides an interface that allows screen reader users to navigate the diagram data structure. Benthic uses a hypergraph traversal structure, where diagram nodes are grouped by hyperedges that represent diagram relations. These relations are presented in the screen reader interface according to their priority (or visual salience), allowing screen reader users to traverse the information similarly to how sighted users might view the diagram. Additionally, users can explore diagrams at various levels of detail by choosing to navigate high-level relations or more detailed relations based on their needs. We evaluate Benthic’s effectiveness through three comparative case studies with existing diagram accessibility systems. Benthic aims to create a design space of traversal structures that will allow blind and low-vision users to leverage the same affordances available to sighted users, enabling intuitive interaction and comprehensive understanding of diagrams.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156821</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid Soft-Rigid Robots: Investigating Series and Parallel Configurations</title>
<link>https://hdl.handle.net/1721.1/156820</link>
<description>Hybrid Soft-Rigid Robots: Investigating Series and Parallel Configurations
Sologuren, Emily R.
The diverse set of traits that soft-rigid robots possess has the potential to be applied to a multitude of applications that require both strength and flexibility. This thesis looks at two kinds of soft-rigid robotic systems: the first is a series assembly of soft-rigid modules with stiffness modulation to form a soft-rigid robotic arm, and the second is a parallel assembly of rigid bones cast into silicone to form a passive soft-rigid flipper for a robotic sea turtle. We first introduce a new class of soft-rigid modules that can modulate their stiffness on a continuum through tendon-driven actuation and the integration of "soft" and "rigid" components. Their serial assembly forms a self-standing, soft-rigid robotic arm (SRRA). Using an adapted soft PD+ controller, we generate trajectories that demonstrate the manipulator’s ability to deform for maneuvering tasks and stiffen for load-bearing tasks. The robotic sea turtle’s parallel, soft-rigid flippers emulate those of its animal counterpart. To leverage this structure for underwater locomotion, we look at a CPG-coupled reinforcement learning framework to optimize for a forward swimming gait.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156820</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Trade Off Performance and Safety in Mixed Autonomy Traffic</title>
<link>https://hdl.handle.net/1721.1/156819</link>
<description>Learning to Trade Off Performance and Safety in Mixed Autonomy Traffic
Ding, Jessica H.
With the advent of autonomous vehicles (AVs), and with the slow but steady consumer adoption of AVs on road networks, there is a newfound need to study the interactions between efficient traffic flow and driving safety in mixed autonomy traffic. Extending from reinforcement learning methods in robotic control and from learning methods for location-based actuators like traffic lights, this thesis considers control strategies afforded by individual AVs, which have recently shown potential for direct optimization of singular system objectives such as traffic smoothing and emission reduction. It introduces a reinforcement learning-based methodological framework to facilitate a study of the trade-offs between performance and safety at a fleet level. This investigation automatically produces Pareto frontier curves for four diverse traffic scenarios based on established mixed traffic benchmarks. The results of this study will inform decision-makers regarding inherent trade-offs in traffic control systems, and this framework can be extended to study arbitrary objectives in complex control systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156819</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modulated Frequency Multiplier Inverter</title>
<link>https://hdl.handle.net/1721.1/156818</link>
<description>Modulated Frequency Multiplier Inverter
Coston, Sarah M.
Many industrial applications such as plasma generation and wireless power transfer require high frequency power inverters (or rf power amplifiers) that are able to output a wide power range despite highly variable load reactances, while also maintaining high efficiency. Previous approaches to this problem, such as switched-mode inverters combined with tunable matching networks, provide adequate, albeit bulky, costly, and complex solutions at lower HF frequencies, while at higher frequencies inefficient linear amplifiers dominate. This thesis introduces an efficient inverter (or switched-mode power amplifier) approach that can provide efficient wide-power-range control into a variable load, while being scalable to increased output frequencies compared to conventional designs. We introduce a wide-range power amplifier that uses frequency control to manage reactive load variations, phase modulation to modulate output power, and frequency multiplication to achieve high output frequency, all while maintaining soft switching. This thesis provides a preliminary development of this modulated frequency multiplier inverter, analyzing and demonstrating its functionality and effectiveness through simulation, showing its ability to achieve high output frequencies, manage wide load reactances, control power over a wide range, and maintain high efficiency.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156818</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognizing Brain Regions in 2D Images from Brain Tissue</title>
<link>https://hdl.handle.net/1721.1/156817</link>
<description>Recognizing Brain Regions in 2D Images from Brain Tissue
Lohawala, Sabeen Imtiyaz
Often, the first step in neuroimaging research is understanding which anatomical structures are present in an image. Structural MRI (sMRI) provides a clear, high-resolution visualization of the anatomy of the brain, capturing physical characteristics like the size and shape of different regions or the presence of abnormalities such as tumors. Whereas sMRI is more commonly acquired in vivo, the neuropathology of many neurodegenerative disorders, like Alzheimer’s, requires post-mortem analysis of the brain through techniques like brain dissection, necessitating the use of other imaging modalities. Various tools and deep learning models have been developed to automatically identify different anatomical structures in 3D MRI volumes. However, the only existing method to segment the anatomical structures in 2D brain slices, whether they be 2D slices extracted from an MRI or photographs of slices from a physically dissected brain, is manual labeling by a trained neuroanatomist, which is costly, resource-intensive, and time-consuming. In this project, we develop a new deep learning method to automatically segment 50 different regions in 2D photographs of the brain. Because a supervised image and segmentation map dataset does not exist for the photographs, we train the state-of-the-art SegFormer model on a supervised dataset of 2D MRI slices. We employ multiple data augmentation techniques to increase the variability of the training data to more closely resemble the variability seen in brain photographs, so that the model is robust enough to segment the anatomical regions in brain photographs. In our experiments, the SegFormer model achieved test Dice scores between 0.6 and 0.75 on the segmentation of 50 different anatomical regions in 2D MRI slices, depending on which augmentations were incorporated during training.
Additionally, the project demonstrated that incorporating complex augmentations that forced the model to learn the segmentation task with reduced contextual information, as well as those that decoupled the tissue and background by manipulating them independently, helped improve the robustness of the model, allowing it to better segment 2D photographs of the brain. Although there is much room for improvement, this project provides a set of techniques that can be extended to further improve the model’s robustness so that it can be applied to other imaging modalities in the future.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156817</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genomic Language Models for Protein Function and Property Prediction</title>
<link>https://hdl.handle.net/1721.1/156816</link>
<description>Genomic Language Models for Protein Function and Property Prediction
Boshar, Sam T.
In the field of natural language processing (NLP), large language models (LLMs) trained on enormous corpora of unlabeled sequence data have demonstrated state-of-the-art performance on a variety of downstream tasks. This approach is appealing because one model can be easily adapted to do well in many modalities, rather than requiring many specialized models. This same architecture has found great success modeling biological data, including protein, mRNA, and genomic sequences. Representations from biological language models have also outperformed highly specialized models, especially in data-scarce scenarios. However, since the genome contains all of the information encoding proteins, genomic language models (gLMs) have the potential to model DNA, RNA, and proteins. In spite of this, the performance of gLMs on proteins is largely unknown due to the lack of datasets pairing proteins with their true coding sequences. In this work, we curate five such coding sequence datasets and use them to study gLM and protein language model (pLM) performance on protein function and property prediction. We show that gLMs are competitive with and even outperform their pLM counterparts on some tasks, and that they perform best using the curated true coding sequences over alternative codon sampling strategies. We perform a series of experiments to find interpretable explanations for gLM performance, and investigate architecture changes to address their shortcomings and improve the ability of gLMs to represent proteins. We found that a joint genomic-proteomic architecture outperforms each individual approach, showing that they capture different, but complementary, sequence representations. We identify examples of such distinct representations in a detailed analysis of their respective embedding spaces. In studying the application of gLMs to proteomics, we look to encourage further research into a unified and synergistic approach to many biological modalities.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156816</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Intermediate Representation for Expressing and Optimizing Computations in Lattice Quantum Chromodynamics</title>
<link>https://hdl.handle.net/1721.1/156815</link>
<description>An Intermediate Representation for Expressing and Optimizing Computations in Lattice Quantum Chromodynamics
Sollee III, Richard P.
The field of Lattice Quantum Chromodynamics faces massive scaling problems because of the large iteration spaces of the required sums, which scale with the factorial of the number of atoms represented. The LQCD IR and rewrite system from this thesis allows these scaling problems to be tackled more quickly and effectively. The IR can represent both mathematical concepts, such as products and sums, and algorithmic concepts, such as precomputations. Our system requires minimal code to initialize the naive algorithm and apply effective rewrites to increase performance. This speedup in development time makes it easy to try various approaches. The rewrite system maintains correctness at each step while allowing drastic changes to the algorithmic approach in search of better asymptotic bounds. Our approaches lead to up to 5x speedups and at worst 2x slowdowns for our most important problem, but with a better development cycle, requiring only 100s of SLOC compared to 1000s.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156815</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Video Games for Empathy and Understanding Towards Human Migration</title>
<link>https://hdl.handle.net/1721.1/156814</link>
<description>Video Games for Empathy and Understanding Towards Human Migration
Casillas, Enrique
Video games have recently started playing a more important role in education, though there is limited research on how they can be used to generate empathy and understanding towards their subject matter. To address this limitation, we present Vida Migrante, an online interactive simulation game about the struggles of Venezuelan migrants living in Ecuador, and analyze whether the game can foster empathy and understanding towards the migrant experience. This study uniquely looks at how the game can communicate findings from real migrant data in such a way that users can empathize with them. A set of 52 students at the Massachusetts Institute of Technology were surveyed and asked a series of Likert-style and open-ended questions to determine whether this game generated empathy and understanding towards the topic. An in-depth quantitative and qualitative analysis reveals that although respondents already had high levels of empathy and understanding, the game was able to increase those levels significantly. This work shows that video games like these can be used not only to increase familiarity with and understanding of a humanitarian issue, but also empathy towards the data and the presented human experiences. Lastly, this paper contributes a discussion of the specific features of this game that allow empathy generation to occur, which may help motivate future work to create effective games that allow their players to empathize with important issues in today’s technology-driven world.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156814</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Policy-Based Access Control in Federated Clinical Question Answering</title>
<link>https://hdl.handle.net/1721.1/156813</link>
<description>Policy-Based Access Control in Federated Clinical Question Answering
Chen, Alice
Retrieval augmented generation (RAG) has recently expanded large language model versatility in answering domain-specific questions using dynamic external knowledge bases, particularly demonstrating promise in assisting clinical settings. However, due to its sensitive nature, patient medical data often requires retrieval to be federated across a decentralized network of hospital institutions, each maintaining internal databases and access control policies. Applying standard RAG to clinical question-answering tasks is complicated by the lack of an interface for hospital resource owners to regulate and restrict access to sensitive clinical documents during retrieval, which is essential for model feasibility in practice. We propose to leverage federated RAG retrieval for clinical trends inference across distributed medical records while adding authorization security mechanisms during retrieval to guarantee security of patient data. We propose (i) user identity authentication administered through a trusted federation of per-hospital OpenID Connect servers, (ii) a framework for integrating policy-based access control (PBAC) security mechanisms at flexible granularity into a federated RAG system to restrict medical data access based on user role attributes, and (iii) ClinicalTrendQA, a novel dataset to evaluate model performance for synthesizing clinical trends grounded on decentralized patient EHR information. To facilitate evaluation of our authorization PBAC framework on protecting information leakage during retrieval, we additionally present a federated 3-hospital case study and demonstrate that the same ClinicalTrendQA query under different user profiles holding varying degrees of access privileges observes the expected EHR information reduction. We also analyze metrics concerning the impact of this retrieval loss on end-to-end response quality against federated insecure and centralized RAG baselines.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156813</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying Gender Bias in Large Language Models: When ChatGPT Becomes a Hiring Manager</title>
<link>https://hdl.handle.net/1721.1/156812</link>
<description>Quantifying Gender Bias in Large Language Models: When ChatGPT Becomes a Hiring Manager
Gerszberg, Nina R.
The growing importance of large language models (LLMs) in daily life has heightened awareness and concerns about the fact that LLMs exhibit many of the same biases as their creators. In the context of hiring decisions, we quantify the degree to which LLMs perpetuate biases originating from their training data and investigate prompt engineering as a bias-mitigation technique. Our findings suggest that for a given resumé, an LLM is more likely to hire a candidate and perceive them as more qualified if the candidate is female, but still recommends lower pay relative to male candidates.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156812</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Compressive Enumeration to Solve Algorithmic Tasks With Language Models</title>
<link>https://hdl.handle.net/1721.1/156810</link>
<description>Using Compressive Enumeration to Solve Algorithmic Tasks With Language Models
Wang, Annie
Large language models are useful tools for generating and synthesizing short code snippets that solve straightforward programming problems. However, their performance on more advanced code generation tasks remains limited, due to the complex algorithmic nature of these tasks. Yet, large language models are often capable of crafting nearly-correct answers to such questions; model-generated responses are prone to small errors that may render an otherwise-correct program incorrect. To address this issue, we investigate whether large language models can be combined with enumerative program synthesis techniques to build solutions to difficult algorithmic problems. This thesis presents and evaluates compressive enumeration as a strategy for improving large language model performance on code generation tasks. Given a question q and a corpus P of model-generated responses to q, compressive enumeration isolates shared code components within P; combining these components in novel ways may make it possible to generate a new solution to q. Experimentation with the Stitch library learning algorithm shows that compressive enumeration is able to generate a working solution for a small number of questions. However, its best performance is typically attained on problems that are already solvable by current large language models. This suggests that compressive enumeration has limited practical value as a code generation strategy; however, future improvements to the technique may make it more widely applicable.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156810</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Super-resolution Control of Ultracold Dipolar Atoms on a 50-nm Scale</title>
<link>https://hdl.handle.net/1721.1/156809</link>
<description>Super-resolution Control of Ultracold Dipolar Atoms on a 50-nm Scale
Du, Li
Degenerate quantum gases of magnetic atoms such as dysprosium (Dy) and erbium (Er) offer new opportunities for quantum simulation research due to their large spin degree of freedom and long-range dipole-dipole interactions. In this thesis, following an introduction to the fundamental properties of Dy, we introduce the design and construction of an experimental apparatus that is capable of producing Bose-Einstein condensates of more than 10⁵ Dy atoms every 10 seconds. &#13;
In addition, we describe two experiments that advance the quantum control over the spin, the motion, the interaction, and the dynamics of ultracold dipolar gases.&#13;
&#13;
&#13;
In the first experiment, we introduce a super-resolution control scheme using a spin-dependent optical potential that localizes Dy atoms on a sub-50 nm scale, a distance more than 10 times shorter than the optical wavelength. With the interatomic distances shortened by a factor of 10, the interatomic dipole-dipole interaction is significantly enhanced. We discuss how this strong and tunable long-range interaction enables the simulation of new classes of many-body Hamiltonians. We experimentally demonstrate the super-resolution technique by creating a bilayer of ultracold Dy atoms and mapping out the atomic density distribution with sub-10 nm resolution. The interlayer dipole-dipole interactions are detected via two out-of-equilibrium experiments.&#13;
&#13;
&#13;
In the second experiment, we study the suppression of dipolar relaxation, an inelastic process that limits the lifetime of higher spin states, using external optical confinement. When ultracold dysprosium atoms are confined in ultrathin optical layers, the magnetic atoms can approach each other only side by side. The interatomic dipole-dipole repulsion provides a protective shield that stops the atoms from tunneling to short range. We observe an order-of-magnitude suppression of inelastic dipolar relaxation losses in the presence of the dipolar shield. This scheme can extend the lifetime of quantum gases of spin mixtures, thereby offering more opportunities for exploring physics such as spin-orbit-coupled Bose gases and dipolar spinor condensates.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156809</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Optimality of Several Algorithms on Polynomial Regression of Empirical Bayes Poisson Model</title>
<link>https://hdl.handle.net/1721.1/156808</link>
<description>On the Optimality of Several Algorithms on Polynomial Regression of Empirical Bayes Poisson Model
Kang, Benjamin
The empirical Bayes estimator for the Poisson mixture model in [1], [2] has been an important problem studied for the past 70 years. In this thesis, we investigate extensions of this problem to estimating polynomial functions of the Poisson parameter rather than just the parameter itself. We generalize three different algorithms for estimation, specifically the Robbins estimator from [2], the NPMLE method from [3], and the ERM method from [4]. For each of these algorithms, we prove upper bounds on the minimax regret. We also prove a general lower bound that applies to any estimation algorithm for this setup. In addition to the theoretical bounds, we empirically simulate the performance of all three algorithms in relation to both the number of samples and the degree of the polynomial function we estimate.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156808</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large-scale Trends in Vision Systems: Novel Methods for Identifiability</title>
<link>https://hdl.handle.net/1721.1/156807</link>
<description>Large-scale Trends in Vision Systems: Novel Methods for Identifiability
Yang, Helen
While the analogy between artificial neural networks (ANNs) and the brain has been well validated in past work, one question without a clear answer is: what causes an ANN to be more or less brain-like? A better understanding of this may lead to the discovery and implementation of more performant and human-like AI systems. However, despite ANNs having been proposed as models of primate visual systems, their success in predicting both the neural and behavioral responses of primates has not been without contention. Increasing architectural and dataset sizes bring forth concerns of black boxes (artificial systems) explaining other black boxes (human intelligence), causing our understanding of the relationship between artificial and biological visual systems to hit a wall. In addition, there is increasing empirical evidence that the representations learned by artificial vision systems are convergent: artificial vision systems trained on large datasets tend to learn similar representations despite numerous differences in architecture and training. This lack of identifiability presents a challenge to the comparison pipelines commonly used to validate artificial vision systems as models of biological vision: if two artificial vision systems with different architectures have convergent representations, we are limited in our ability to reason about the structural properties of an individual artificial vision system and determine which system provides a better model of the brain. In light of these issues, we provide an analysis of current frameworks for measuring artificial and biological visual system similarity and propose a novel approach toward improving identifiability between artificial vision systems via contrastive stimuli. We show that our approach offers better identifiability between artificial vision systems compared to standard benchmarks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156807</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Smarter Agents for Agent-Based Models</title>
<link>https://hdl.handle.net/1721.1/156806</link>
<description>Smarter Agents for Agent-Based Models
Kuru, Nurullah Giray
Agent-based models (ABMs) are powerful tools for decision-making due to their ability to simulate systems with individual-level granularity. Recent advances have mitigated the computational costs of scaling ABMs to real-world population sizes; however, the potential of ABMs is also constrained by the quality of the underlying data and feedback loops. We introduce two approaches to improving data quality in ABMs. First, we incorporate LLM peers in ABM simulations to guide agent decision-making and thought generation, leveraging the world model learned by LLMs. We analyze both proprietary and open-source LLMs for suitability in ABM use, and find GPT-3.5 to be a strong candidate for distinguishing between agent characteristics and producing plausible isolation decisions in an epidemic. We introduce an effective and scalable system for using LLMs in ABMs by characterizing agents using a small set of characteristics and using LLM peers to guide agent groups. We conduct experiments in a synthetic replica of the Astoria neighborhood of New York City and show that this system achieves better calibration and enables more detailed analysis. Second, we propose privacy-preserving ABMs that can integrate real agents into ABM simulations in a distributed system using cryptographic protocols. We describe algorithms for running simulations, calibration, and analysis of ABMs, and provide a proof of concept. This approach enables adding real human feedback into ABMs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156806</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inverse Constitutional AI</title>
<link>https://hdl.handle.net/1721.1/156804</link>
<description>Inverse Constitutional AI
Kostolansky, Timothy H.
The alignment of large language models (LLMs) to human values has become increasingly pressing as their scale and capabilities have grown. One important feature of alignment is understanding the preference datasets that are used to finetune LLMs. Inverse Constitutional AI (ICAI) is presented as a novel interpretability framework to discover the principles underlying preference datasets. Motivated by the Constitutional AI training paradigm of instilling principles in models, ICAI aims to extract a succinct "constitution" of natural language principles from data. This thesis contributes an initial attempt at realizing ICAI through a clustering-based methodology applied to preference datasets. The proposed approach involves embedding preference pairs into vector representations, clustering the embeddings to group related preferences, generating interpretable principles for each cluster using language models, and validating these principles against held-out samples. Empirical evaluation is conducted on the hh-rlhf dataset for training helpful and harmless AI assistants, as well as a synthetic dataset constructed by relabeling hh-rlhf samples with predefined principles. Results demonstrate promising capabilities in clustering semantically coherent topics and generating human-interpretable principles, while also highlighting limitations in achieving fully disentangled, principle-based clustering. Directions for future work are discussed, including soft clustering, bottom-up principle extraction, prompt optimization approaches, and sparse dictionary learning methods. In this work, I argue the following thesis: ICAI shows promise as a strategy to disentangle and explain the preferences represented in preference data. A clustering-based approach to ICAI, though, fails to successfully extract a constitution of principles from preference data, because clustering occurs along the topics in the data rather than the preferences themselves.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156804</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and Evaluation of an LLM-Based Tool for Automatically Building Web Applications</title>
<link>https://hdl.handle.net/1721.1/156803</link>
<description>Development and Evaluation of an LLM-Based Tool for Automatically Building Web Applications
Voronin, Diana Nguyen
In this thesis, we present Kodless, a platform that enables users to automatically build web applications from natural language descriptions without requiring them to write, review, or debug the generated code. Kodless structures applications using concept design, a theory that views software as a collection of interacting yet independent units of functionality mapping to human behavior patterns. The platform leverages large language models to generate functional backend code, combining concept design principles with a robust framework for developing concept implementations and integrating them into a standardized application architecture. To evaluate Kodless's performance, we conduct a study in which we use the platform to develop an application through an iterative prompt refinement process. We argue that the case study illustrates the importance of concept-driven prompt engineering and offer guiding principles for designing effective prompts. Furthermore, this thesis contributes improvements to the Kodless platform, including extended support for MongoDB integration and the automatic generation of a frontend testing client. We also introduce a frontend code generation assistant to enable automatic generation of reactive user interfaces. Ultimately, Kodless represents a promising path towards changing how we approach AI-driven software design and development.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156803</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Utility Libraries for Traversing and Manipulating Tree-like Data Structures with Varying Schemas</title>
<link>https://hdl.handle.net/1721.1/156802</link>
<description>Utility Libraries for Traversing and Manipulating Tree-like Data Structures with Varying Schemas
Janicki, Adam
Tree-like data structures are commonly used data types found in the wild in a wide array of JavaScript projects. A specific example of one of these structures is an abstract syntax tree (AST). However, the lack of good libraries to handle trees has led many developers and large-scale code bases to reimplement their utility functions over and over again. To address these concerns within the JavaScript developer community, we propose Treecle and Vastly: two free, open-source libraries that provide utility functions and operations to help developers work with trees and ASTs, respectively.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156802</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Recommendation System for Ideation: Enhancing Supermind Ideator</title>
<link>https://hdl.handle.net/1721.1/156801</link>
<description>A Recommendation System for Ideation: Enhancing Supermind Ideator
Papacica, Daniel
Recommendation systems are widely utilized across various domains such as e-commerce, entertainment, and social media to enhance user experience by personalizing content and suggestions. Despite their widespread use, these systems are rarely applied to the ideation process, presenting unique challenges due to the inherently creative and complex nature of generating and developing novel ideas. This thesis details the creation and assessment of a recommendation system for the Supermind Ideator platform, aimed at enhancing the creative ideation processes. The recommendation system leverages machine learning techniques to dynamically adapt to user input statements based on statement "scope", a sub-task that is thoroughly explored and tested in this paper. "Scope" is then integrated into the recommendation system’s static rules-based algorithm to suggest the next best Supermind Design "move". This work not only contributes a practical tool to the field of ideation but also extends the theoretical understanding of recommendation systems in facilitating complex, subjective cognitive tasks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156801</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering Analog Integrated Circuit Design through Large Language Models and Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/156800</link>
<description>Empowering Analog Integrated Circuit Design through Large Language Models and Reinforcement Learning
Terpstra, Irene
Analog Integrated Circuit design consists of several complex steps that are difficult to optimize. Automating the transistor sizing process specifically comes with many challenges. The problem has a large design space, requires complex performance trade-offs, and needs to adjust to rapidly advancing semiconductor technology. As a result, the task of sizing transistors is traditionally performed by experts with years of experience. Various optimization and reinforcement learning methods have been proposed to automate this process. While these methods have shown great competence, they must learn complex circuit dynamics from scratch, resulting in black-box solutions. This thesis proposes that the background knowledge contained in Large Language Models (LLMs) can guide the decisions of circuit designers, and that this guidance can be used to improve the exploration efficiency of both mathematical optimizers and reinforcement learning algorithms. This thesis demonstrates that LLMs possess a foundational understanding of analog circuit design, including circuit calculation and netlist comprehension. It also builds a framework to integrate LLMs as heuristic tools with existing optimization methods. This is a first-of-its-kind exploration into linking LLMs with optimization techniques for analog circuit design. While the current experimental results do not show improvements in design quality or speed, this work establishes the groundwork for further advancements with more sophisticated or fine-tuned LLMs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156800</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coherency Loss for Hierarchical Time Series Forecasting</title>
<link>https://hdl.handle.net/1721.1/156799</link>
<description>Coherency Loss for Hierarchical Time Series Forecasting
Hensgen, Michael Lowell
In hierarchical time series forecasting, some series are aggregated from others, producing a known coherency metric between series. We present a new method for enforcing coherency on hierarchical time series forecasts. We propose a new loss function, called Network Coherency Loss, that minimizes the coherency loss of the weight and bias of the final linear layer of a neural network. We compare it against a baseline without coherency and a state-of-the-art method that uses projection to strictly enforce coherency. We find that, by choosing our Network Coherency Loss parameters based on validation data, for four datasets of varying sizes we produce improved accuracy over our two benchmark models. We also find that, when compared to an alternative loss function also designed to produce coherency, our Network Coherency Loss function produces similar accuracies but improves the coherency on the test data.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156799</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representation Learning for Extrapolation via Bilinear Transduction</title>
<link>https://hdl.handle.net/1721.1/156798</link>
<description>Representation Learning for Extrapolation via Bilinear Transduction
Spiride, Andrei
Typical machine learning systems, such as deep neural networks, perform well at predicting on new examples that come from the same distribution as the initial training data. However, these systems are not typically robust to examples that do not come from the same distribution as the training samples. These testing samples are characterized as out-of-distribution (OOD). Building on a proven bilinear transduction method [1] for accurately predicting on OOD examples, we propose applying this framework to learned representations instead of hand-designed state representations. This work is geared towards enabling the bilinear transduction approach to generalize to a wider range of data types and tasks when such hand-designed representations are not available. We use deep neural networks to learn representations of certain data types, such as images, and apply bilinear transduction to these learned representations. This has the potential to further expand the out-of-support prediction capabilities of the bilinear transduction framework.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156798</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Connecting Deep Learning Models to the Human Brain</title>
<link>https://hdl.handle.net/1721.1/156797</link>
<description>Connecting Deep Learning Models to the Human Brain
Subramaniam, Vighnesh
In this thesis, we introduce innovative methodologies for connecting new deep learning models, particularly those that integrate vision and language, with human brain processing. These models have shown remarkable advancements in tasks such as object recognition, scene classification, and language processing, achieving near-human accuracy in some cases. This raises intriguing questions about how closely the computations and geometric structure of these models mirror those of the human brain. Our method starts with measuring brain activity in response to vision and language stimuli and then exposes these stimuli to deep learning models to collect their internal activations. We analyze the similarity between these activations and brain activity using a specific representational distance metric. We focus on introducing statistical algorithms to assess whether one model is significantly more similar to the brain than another. Through our novel methodology, we assess whether there is a more significant correlation between brain regions and multimodal models compared to unimodal ones. Our investigation reveals brain areas associated with vision-language integration and models of vision-language integration that are potentially most similar to the brain.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156797</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing and Optimizing the Networking Stack in Databases</title>
<link>https://hdl.handle.net/1721.1/156796</link>
<description>Characterizing and Optimizing the Networking Stack in Databases
Kafle, Prabhakar
Databases are latency-critical applications, and client-database communication is a significant contributor to the end-to-end latency. However, the database community has paid little attention to the networking overhead in databases. This thesis focuses on the overhead from the network stack in the server. I characterize the contributions of different components in the database server to the end-to-end latency, focusing on the networking stack. I observe that in transactions involving a single read query, the server network stack accounts for almost 15% of the total end-to-end latency in VoltDB. Most of this overhead comes from TCP packet processing, interrupt handling, context switches, and I/O multiplexing. This work also explores avenues to optimize the networking stack overhead. I find that moving networking to userspace by bypassing the kernel can significantly reduce the networking stack overhead. This switch in the network stack can help achieve a significant improvement in throughput and lower latency for both of the benchmarks used. While the thesis focuses on the server networking stack, similar optimizations can be applied to the client side if the necessary hardware (CPU, NIC) is available.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156796</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Language Models to Understand Molecular Structures</title>
<link>https://hdl.handle.net/1721.1/156795</link>
<description>Using Language Models to Understand Molecular Structures
Fan, Vincent K.
In data-rich modalities such as text and images, large foundation models have demonstrated remarkable capabilities. However, in the life sciences, datasets of comparable scale are prohibitively costly to assemble, underscoring the need to leverage advances in language modelling to improve machine learning techniques for the life sciences. This thesis details research in two such directions: information extraction and text retrieval. Information extraction from chemistry literature is vital for constructing up-to-date reaction databases. Complete extraction requires combining information across text, tables, and figures, whereas prior work has mainly investigated extracting reactions from single modalities. In this thesis, I present OpenChemIE to address this complex challenge and enable the extraction of reaction data at the document level. OpenChemIE approaches the problem in two steps: extracting relevant information from individual modalities with specialized neural models and then integrating the results via chemistry-informed algorithms to obtain a final list of reactions. I meticulously annotated a challenging dataset of reaction schemes with R-groups to evaluate OpenChemIE, which achieves an F1 score of 69.5%. Additionally, the reaction extraction results of OpenChemIE attain an accuracy score of 64.3% when directly compared against the Reaxys chemical database. OpenChemIE is most suited for information extraction from organic chemistry literature, where molecules are generally depicted as planar graphs or written in text and can be consolidated into a SMILES format. Additionally, I detail preliminary research in developing a tool to retrieve full-text documents that are relevant to specific protein sequences. I describe the dataset, which is currently under construction, as well as experiments pointing to the promise of this approach.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156795</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpreting and Editing Memory in Large Transformer Language Models</title>
<link>https://hdl.handle.net/1721.1/156794</link>
<description>Interpreting and Editing Memory in Large Transformer Language Models
Meng, Kevin
This thesis investigates the mechanisms of factual recall in large language models. We first apply causal interventions to identify neuron activations that are decisive in a model’s factual predictions; surprisingly, we find that factual recall corresponds to a sparse, localizable computation in the MLP weights of the GPT models we study. Harnessing this insight, we then develop methods for efficiently and surgically inserting up to 10,000 new memories into a transformer; these methods perform well in terms of both generalization and specificity. We conclude with some directions for future work.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156794</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analog Underwater Backscatter: Networked Underwater Sensing at Microwatt Power Levels</title>
<link>https://hdl.handle.net/1721.1/156793</link>
<description>Analog Underwater Backscatter: Networked Underwater Sensing at Microwatt Power Levels
Patnaik, Ritik
We present Analog Underwater Backscatter (AUB), the first technology for microwatt-level underwater wireless sensor networks. AUB departs from past underwater backscatter technologies in that it encodes sensor data directly into the physical layer through analog (frequency) modulation. Our design introduces multiple innovations that enable it to address challenges in practical underwater environments arising from mobility (Doppler shift) and the low-frequency carrier, which makes it vulnerable to small hardware imperfections. AUB’s design also introduces the first ultra-low-power wakeup receiver for underwater backscatter, enabling it to operate for a long time on small batteries. We built an end-to-end prototype of AUB and evaluated it in a river. Our evaluation demonstrates that AUB consumes 5.9 µW, 46× lower power than state-of-the-art past underwater backscatter systems. We also demonstrate AUB’s ability to sense two of the most important oceanographic vitals: temperature and depth.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156793</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning an Embedding for Vehicle Telematics</title>
<link>https://hdl.handle.net/1721.1/156792</link>
<description>Learning an Embedding for Vehicle Telematics
Leonard, Matthew
Vehicular telematics involves the collection and processing of data about driving behavior; however, mining and modeling this data is difficult due to its large volume. We hypothesize that the data will follow regular patterns of events that occur during drives, and that we can learn these patterns. With this intuition, we design a neural network that will effectively embed sections of accelerometer data into a lower-dimensional space, with little loss of information or accuracy relative to the dimensionality reduction, as well as several other desirable geometric properties for indexing and analysis of the data. We further develop an accurate summary of the distribution of each drive in this lower-dimensional space, which would serve as a proxy for the occurrence of events within these drives. From this system, we develop a method of comparison between different drives that highlights whether or not particular events occurred in each drive. This could be used to develop a more robust and nuanced risk model, and to determine which events in a drive are associated with risk, to provide feedback to end users on their driving.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156792</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Max 2SAT-3, Net, Euclidea: Techniques and Results in Computational Inapproximability</title>
<link>https://hdl.handle.net/1721.1/156791</link>
<description>Max 2SAT-3, Net, Euclidea: Techniques and Results in Computational Inapproximability
Luo, Victor
This Master’s thesis investigates three diverse problem domains through the lens of computational inapproximability: Max 2SAT-3, the Net tile-rotating puzzle family, and the mobile game Euclidea. Max 2SAT-3 is a problem long known to be APX-complete, but finding a clear proof is harder than one might expect. We examine the history of Max 2SAT-3, addressing past misconceptions and clarifying where the reduction chain has been opaque, and present a novel proof of its APX-completeness. Net variants form a wide class of puzzles with lots of potential for future research. We introduce a natural optimization variant of Net and demonstrate its inapproximability, as well as consolidate existing findings and present other new results. Euclidea is a mobile game based on Euclidean straightedge-and-compass constructions. We define the game as an optimization problem and establish its APX-hardness, as well as discuss challenges in upper-bounding its complexity, relating to current knowledge gaps regarding the constructible and algebraic numbers.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156791</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling the Rust Compiler to Reason about Fork/Join Parallelism via Tapir</title>
<link>https://hdl.handle.net/1721.1/156790</link>
<description>Enabling the Rust Compiler to Reason about Fork/Join Parallelism via Tapir
Hilton, Jay
Rust + Cilk is an extension to the Rust language incorporating Cilk’s keywords for language-level parallelism. The Rust + Cilk compiler leverages the Rust compiler’s static verification of data race freedom and the OpenCilk parallelism platform’s strong theoretical guarantees for performance of parallel programs. I compare Rust + Cilk to existing library-based parallelism solutions in Rust such as Rayon, as well as to C programs parallelized with OpenCilk, based on performance and ergonomics. I find that Rust + Cilk exhibits marginally worse performance than Rayon, although I expect these differences can be bridged with further work. Additionally, Rust + Cilk has ergonomic advantages for some kinds of parallel programs. I outline further research that could make Rust + Cilk a more complete and performant system to further take advantage of the benefits language-based parallelism solutions can offer while statically verifying data race freedom.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156790</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Control Signals for Reconstruction-based Time Series Anomaly Detection</title>
<link>https://hdl.handle.net/1721.1/156789</link>
<description>Modeling Control Signals for Reconstruction-based Time Series Anomaly Detection
Song, Grace Y.
Automated time series anomaly detection methods can provide insights while reducing the load placed on human experts in a variety of settings. Machine-generated signals, such as those produced by sensors, often contain control signals in addition to the target observation signal. These signals may provide additional insight about the normal vs. abnormal properties of the observation signal. Despite this fact, even recent anomaly detection methods using deep learning give limited consideration to the relationship between observation and control signals, often failing to handle the control signal at all. This work proposes pre-processing, modeling, and evaluation methods for multivariate, heterogeneous time series to examine how using information from the control signal can improve anomaly detection. We develop a deep learning reconstruction-based pipeline and test its performance on the NASA Soil Moisture Active Passive (SMAP) satellite and the Mars Science Laboratory (MSL) Rover, which contain heterogeneous sensing data from exploratory missions. The pipeline follows the Sintel machine learning framework and is accessible through the Meissa library, which builds on the capabilities of the open-source library Orion for end-to-end unsupervised time series anomaly detection pipelines.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156789</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental, geochemical and isotopic insights on melting and degassing behavior on Earth</title>
<link>https://hdl.handle.net/1721.1/156788</link>
<description>Experimental, geochemical and isotopic insights on melting and degassing behavior on Earth
Beaudry, Patrick Claude
This thesis investigates igneous and geothermal processes on Earth through various lenses and at various scales. The processes addressed range from fluid circulation in hydrothermal systems to the formation of the continental crust, onset of subduction, degassing of arc magmas, and mantle melting at mid-ocean ridges. Through this unlikely “medley” of Earth Science problems, parallels are drawn between experiments investigating melting and degassing at high pressure, geochemical and isotopic fingerprints of the Earth’s oldest rocks, and kinetic processes associated with aqueous supercritical fluids. The main lines of investigation rely on the tools of experimental petrology (Chapters 2 and 3) and stable isotope geochemistry (Chapters 1 and 4).  Chapter 1 explores the systematics of methane (CH4) isotopologues in geothermal systems, with a particular focus on the kinetics of isotopic exchange reactions in the vicinity of the supercritical point of water (373°C and 220 bars). This study finds that CH4 isotopologues uniquely record high-temperature processes, given their high closure temperature—i.e. slow equilibration timescales under typical geothermal conditions—combined with the fast timescales associated with supercritical fluids. Chapter 2 describes the development of a new piston cylinder experimental approach to study the solubility and speciation of sulfur (S) in hydrous, oxidized primitive magmas as can be found in subduction zones. High pressure experiments demonstrate the coupled behavior of H2O and S, which mutually interact to fix redox conditions. Exsolution of S-rich fluids is found to play an important role in magmatic redox conditions, with apparent preferential loss of oxidized S to a fluid phase, explaining several natural observations from arc environments. 
Chapter 3 confirms the primary nature of a high MgO, high Al2O3 mid-ocean ridge basalt (MORB) glass from the ultraslow spreading Southwest Indian Ridge, identifying its multiple saturation boundaries within the plagioclase lherzolite stability field. This finding validates newly developed quantitative petrogenetic models for MORB, with important implications for our understanding of mantle thermal structure and for the origin of primitive glasses found globally at mid-ocean ridges. Chapter 4 describes the multiple S isotope characteristics of a suite of mafic to felsic rocks from the 4.0–2.9 Ga Acasta Gneiss Complex (AGC) from the Northwest Territories, Canada. These help place constraints on the early Earth S cycle and its relation to tectonic regime. Along with other geochemical indicators, the Acasta rocks appear to record a gradual onset of subduction-like processes, established at least by ~3.3 Ga.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156788</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Mechanistic Interpretability for Neural Networks</title>
<link>https://hdl.handle.net/1721.1/156787</link>
<description>Automated Mechanistic Interpretability for Neural Networks
Liao, Isaac C.
Mechanistic interpretability research aims to deconstruct the underlying algorithms that neural networks use to perform computations, such that we can modify their components, causing them to change behavior in predictable and positive ways. This thesis details three novel methods for automating the interpretation process for neural networks that are too large to manually interpret. Firstly, we detect inherently multidimensional representations of data; we discover that large language models use circular representations to perform modular addition tasks. Secondly, we introduce methods to penalize complexity in neural circuitry; we discover the automatic emergence of interpretable properties such as sparsity, weight tying, and circuit duplication. Finally, we apply neural network symmetries to put networks into a simplified normal form, for conversion into human-readable Python; we introduce an accompanying program synthesis benchmark and successfully convert 32 of its 62 networks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156787</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reed-Relay Switched Tuning Circuit for Stretchable RF Coils in Low Field, Portable MRI</title>
<link>https://hdl.handle.net/1721.1/156786</link>
<description>Reed-Relay Switched Tuning Circuit for Stretchable RF Coils in Low Field, Portable MRI
Nwigwe, Alexandra C.
While MRI (Magnetic Resonance Imaging) technology allows us to get detailed images of the inside of a subject’s body, it most commonly requires very expensive, large-scale machinery, which limits the scenarios in which it can be used. These costly machines are usually high-field MRI, which operates at magnetic fields of 1.5T and above and produces images with short scan times and high resolution. Yet because of the accessibility and affordability drawbacks that high-field MRI poses, there has been an effort to devote more research to portable low-field MRI. Low-field MRI opens doors for low-cost and point-of-care imaging, but unfortunately at the expense of decreased image quality and greater noise interference. An RF head coil that molds to the user’s head would be able to better excite and receive signal from the subject and counteract some of the inherent disadvantages of low-field MRI. My proposed thesis will pursue the idea of using flexible, subject-adaptable RF head coils in conjunction with an autotuning circuit as a way to extract better signal from a subject at low magnetic fields.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156786</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Algorithms for Mixtures of Linear Dynamical Systems: A Practical Approach</title>
<link>https://hdl.handle.net/1721.1/156785</link>
<description>Learning Algorithms for Mixtures of Linear Dynamical Systems: A Practical Approach
Kumar, Nitin A.
In this work, we give the first implementation of an algorithm to learn a mixture of linear dynamical systems (LDS’s), and an analysis of algorithms to learn a single linear dynamical system. Following the work of Bakshi et al. ([1]), we implement a recent polynomial-time algorithm based on a tensor decomposition with learning guarantees in a general setting, with some simplifications and minor optimizations. Our largest contribution is giving the first expectation-maximization (E-M) algorithm for learning a mixture of LDS’s, and an experimental evaluation against the Tensor Decomposition algorithm. We find that the E-M algorithm performs extremely well, and much better than the Tensor Decomposition algorithm. We analyze performance of these and other algorithms to learn both a single LDS and a mixture of LDS’s under various conditions (such as how much noise is present) and algorithm settings.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156785</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guiding Nonconvex Trajectory Optimization with Hierarchical Graphs of Convex Sets</title>
<link>https://hdl.handle.net/1721.1/156783</link>
<description>Guiding Nonconvex Trajectory Optimization with Hierarchical Graphs of Convex Sets
von Wrangel, David
Collision-free motion planning with trajectory optimization is inherently nonconvex. Some of this nonconvexity is fundamental: the robot might need to make a discrete decision to go left around an obstacle or right around an obstacle. Some of this nonconvexity is potentially more benign: we might want to penalize high-order derivatives of our continuous trajectories in order to encourage smoothness. Recently, Graphs of Convex Sets (GCS) have been applied to trajectory optimization, addressing the fundamental nonconvexity with efficient online optimization over a "roadmap" represented by an approximate convex decomposition of the configuration space. In this thesis, we explore some of the most useful nonconvex costs and constraints and introduce a novel hierarchical GCS structure, composing subgraphs that represent different task phases or alternative paths and enabling efficient planning for complex tasks involving both discrete decision-making and continuous trajectory generation. We investigate the suitability of combining convex "global" optimization using GCS with nonconvex trajectory optimization for rounding the local solutions. Through extensive experiments on diverse robotic systems, we demonstrate that this combination can effectively guide a small number of nonconvex optimizations, ultimately finding high-quality solutions to challenging nonconvex motion planning problems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156783</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Test Suite for Saliency Method Evaluation Metrics</title>
<link>https://hdl.handle.net/1721.1/156781</link>
<description>A Test Suite for Saliency Method Evaluation Metrics
Kaspar, Moulinrouge
This thesis introduces a structured test suite designed to evaluate the input sensitivity of saliency methods, a crucial factor when interpreting machine learning models, particularly in high-stakes environments. Saliency methods, by highlighting essential input features influencing model decisions, serve as a key tool for understanding model behavior. Yet, their effectiveness can vary, often presenting challenges in selection due to their inconsistent reliability and the potential for unfaithful representations of model dynamics. To address these challenges, our work enhances the process of selecting and applying saliency methods by rigorously testing their response to input perturbations, from adversarial modifications to minor variations. This test suite specifically assesses aspects such as completeness, deletion, faithfulness, and robustness across various data types—including textual and image data—and model architectures like convolutional and transformer models. We demonstrate the utility of the test suite by using it to compare how different saliency methods, as well as the same method across different architectures, behave under varied conditions. Our findings reveal significant variations in how these methods respond to changes in input data, providing insights that guide users in choosing more reliable techniques for interpreting model decisions. This facilitates a deeper understanding of which methods are best suited for specific tasks and promotes the selection of techniques that enhance the transparency and accountability of AI systems. Ultimately, this thesis contributes to advancing ethical compliance and fostering trust in automated decision-making processes by providing a comprehensive evaluation platform for saliency methods.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156781</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding the Reach of Quantum Enhanced Gravitational-Wave Detectors</title>
<link>https://hdl.handle.net/1721.1/156780</link>
<description>Expanding the Reach of Quantum Enhanced Gravitational-Wave Detectors
Ganapathy, Dhruva
The Advanced LIGO detectors are the most precise displacement sensors ever made, operating at the cutting edge of quantum noise limited sensitivity. The introduction of non-classical squeezed states to reduce quantum shot noise during the third gravitational wave observing run O3 ushered in the era of quantum-enhanced gravitational wave interferometry. This was, however, accompanied by an increase in measurement back-action, in the form of quantum radiation pressure noise, which degraded detector sensitivity at low frequencies below 100 Hz. In the early 2000s, Kimble et al. [1] proposed the use of optical filter cavities to prepare frequency dependent squeezed states which circumvent measurement back-action by suppressing radiation pressure noise at low frequencies while continuing to reduce shot noise across the rest of the gravitational wave signal band.&#13;
&#13;
In this thesis, we explore frequency dependent squeezing for gravitational wave detectors, with an emphasis on optimal filter cavity design, and characterization of squeezing in optical systems. We then describe the commissioning of a 300m filter cavity for the first realization of frequency dependent squeezing in a gravitational wave interferometer for the fourth gravitational wave observing run O4. Along with significantly enhancing the astrophysical sensitivity of the LIGO detectors, this is also the latest milestone in several decades of research in quantum noise reduction.&#13;
We conclude the thesis by extending frequency dependent squeezing to alternate interferometer configurations by studying the feasibility of detuning the signal cavity of the interferometer to enhance sensitivity to kilohertz signals from neutron star post-mergers.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156780</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Wide-Field Infrared Transient Explorer (WINTER): a new near-infrared time-domain survey</title>
<link>https://hdl.handle.net/1721.1/156779</link>
<description>The Wide-Field Infrared Transient Explorer (WINTER): a new near-infrared time-domain survey
Frostig, Danielle
The Wide-Field Infrared Transient Explorer (WINTER) is a new near-infrared observatory for time-domain astronomy, the study of the evolving night sky. The field has exploded in the last two decades at optical wavelengths, but complementary infrared efforts have been limited by available detector technologies. In this thesis, I present the design, build, and early operations of the new WINTER instrument, which was installed on a dedicated 1-meter robotic telescope at Palomar Observatory in June of 2023. &#13;
&#13;
WINTER’s science goals include robotic follow-up of kilonovae from binary neutron star (BNS) and neutron-star black-hole (NSBH) mergers, surveys to study galactic and extragalactic transients and variables, and building up a deep coadded image of the near-infrared sky. The project also helped develop the world’s largest Indium Gallium Arsenide (InGaAs) detectors for cost-effective near-infrared astronomical imaging without cryogenic cooling. The custom camera combines six InGaAs detectors with a novel tiled fly’s-eye optical design to cover a &gt;1 degree-squared field of view with a 90% fill factor. WINTER observes in the Y-, J-, and shortened-H-band filters (0.9-1.7 microns), with a filter tray selecting one filter at a time. &#13;
&#13;
The project is a collaboration between MIT and Caltech, with Caltech leading the data reduction pipeline and observatory site management and MIT leading the instrument and facility hardware and operations. This thesis touches upon all aspects of the instrument, highlighting the major subsystems I directed, including detailed instrument design and modeling, project requirements flowdown, kilonova follow-up science simulations, testing of new detectors alongside the development of custom readout firmware and software, stray-light analysis, robotic scheduling software, and on-sky early operations and science.&#13;
&#13;
Since its installation in 2023, WINTER has been operating robotically each night, with ongoing work to improve the instrument. This thesis presents a snapshot of WINTER's progress as of April 2024, concluding with an update on its current performance and future directions for the project.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156779</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Quantum Information to Cosmic Censorship: Emergent Spacetimes and Their Surfaces</title>
<link>https://hdl.handle.net/1721.1/156778</link>
<description>From Quantum Information to Cosmic Censorship: Emergent Spacetimes and Their Surfaces
Folkestad, Åsmund Schiager
In this thesis, we explore classical and semiclassical gravity from the perspective of the AdS/CFT correspondence. We leverage global methods in General Relativity (GR) together with quantum information- and complexity-theoretic properties of the conformal field theory (CFT) dual to obtain novel results in classical and semiclassical gravity.&#13;
&#13;
In the first part, we obtain a collection of results suggesting that holography enforces a refined version of Cosmic Censorship that potentially can replace the Weak Cosmic Censorship (WCC) conjecture, which has been disproven in Anti-de Sitter (AdS) spacetimes. We show that certain important GR results usually proven assuming WCC can instead be derived from consistency of the AdS/CFT dictionary. We also construct new likely violations of WCC in asymptotically AdS₄ spacetimes, but show that these cannot have a holographic dual; this provides evidence that singularities are better behaved in holographic theories, compared to GR with generic matter. Finally, we show a connection between event horizons and CFT pseudorandomness, and we construct a new measure of the size of a naked singularity. We conjecture that quantum gravity only forbids macroscopic naked singularities, according to this measure.&#13;
&#13;
In the second part, we derive new properties of various extremal submanifolds, with several consequences for AdS/CFT. For example, we provide a physically intuitive explanation for why extremal surfaces are natural boundaries between independent subsystems. We also prove results that constrain far-from-equilibrium dynamics in gravity and CFTs.  Finally, we construct a puzzle showing that geometric states with large entanglement need not correspond to a wormhole, highlighting subtleties in the ER=EPR proposal.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156778</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extensible Real-Time Sensor and Test Interface for a System-on-Chip</title>
<link>https://hdl.handle.net/1721.1/156777</link>
<description>Extensible Real-Time Sensor and Test Interface for a System-on-Chip
Studer, Alexandre S.
This thesis describes the development of a printed circuit board (PCB) that enables connecting external sensors and a host computer to a custom Application-Specific Integrated Circuit (ASIC). The ASIC, previously developed by the Low-Energy Autonomy and Navigation research group, is designed for autonomous navigation on microrobots, such as drones. To enable the real-time data processing required for this application, the ASIC includes a custom Sensor-and-Debug IP block that provides Serial Peripheral Interface (SPI) and First-In/First-Out (FIFO) buses. The custom PCB includes a multiplexer circuit that allows multiple sensors to be connected to the ASIC's single SPI bus. It also includes a USB-to-FIFO interface, developed around the RP2040 microcontroller, which enables connecting a host computer to the ASIC's FIFO bus. Ultimately, the PCB simplifies the connection of external sensors, facilitates debugging of the ASIC, and can be miniaturized for mounting on an autonomous microrobot, such as a drone, in the future.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156777</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Control-oriented Meta-learning on Hardware</title>
<link>https://hdl.handle.net/1721.1/156775</link>
<description>Implementing Control-oriented Meta-learning on Hardware
Sohn, Joshua C.
Unpredictable weather conditions pose a daunting challenge for the robust control of unmanned aerial vehicles, also known as drones. The control-oriented meta-learning algorithm aims to solve this problem by learning a controller that can adapt to dynamic environments. This algorithm has already been derived and simulated for a two-dimensional model. This project explores the implementation of the control-oriented meta-learning algorithm on a hardware platform. After extending the algorithm to a three-dimensional model, it was tested in a physics-based simulator and deployed on a hexarotor in the real world. Both in simulation and in real life, the learned controller outperformed a traditional controller in the presence of wind.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156775</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Multistage Compilation of Machine Learning&#13;
Computation Graphs</title>
<link>https://hdl.handle.net/1721.1/156774</link>
<description>Fast Multistage Compilation of Machine Learning&#13;
Computation Graphs
Dighe, Kaustubh
Machine learning applications increasingly demand more computational power. Many applications, such as language models, have become so large that they are run in parallel on distributed systems. However, the details of optimally scheduling, or even just running, machine learning models on distributed systems can be a distraction for researchers ideating models. Hence, abstractions have been developed to facilitate running machine learning models in parallel on distributed systems. We present a compiler for the StreamIt language, a language designed for abstract signal processing and multicore programming. We use this abstraction as a way to distribute the computation of machine learning models programmed in PyTorch.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156774</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Soccer Last Touch and Automatic Event Detection with Skeletal Tracking Data</title>
<link>https://hdl.handle.net/1721.1/156773</link>
<description>Soccer Last Touch and Automatic Event Detection with Skeletal Tracking Data
Bian, George C.
With the rapid growth of soccer data collection technology worldwide, there is an increasing need for new, efficient methods to analyze match data. Such methods would help soccer stakeholders more easily scrutinize game events for strategy improvement and individual player evaluation. Currently, most existing event data is annotated manually, an extremely time-consuming task. Recent work on automatic event generation leverages decision tree algorithms to partially identify game events from player center-of-mass and ball tracking data, but has proven limited in accuracy in practice. New computer vision models have enabled the extraction of player joint data from broadcast video, providing a newer, richer dataset for automatic event detection. The proposed thesis will seek to validate brand-new skeletal joint data, determine the last player to touch the ball at any timestamp during a match, and build a decision tree algorithm for classifying duel-like events and goalkeeping outcomes with the additional context of player joint locations.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156773</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Patient Outcomes in the EPOCH Clinical Trial</title>
<link>https://hdl.handle.net/1721.1/156772</link>
<description>Predicting Patient Outcomes in the EPOCH Clinical Trial
Parsan, Nithin
Metastatic colorectal cancer (mCRC) has a poor prognosis and high mortality rate, but innovative therapies such as transarterial radioembolization (TARE) can improve patient outcomes. The EPOCH clinical trial demonstrated that TARE improved hepatic progression-free survival (hPFS) in patients with colorectal liver metastases, and computational methods to analyze the multimodal data collected can identify patient subgroups and predict treatment response for personalized medicine. First, a comprehensive data preprocessing pipeline curated a high-quality dataset of liver-region Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scans paired with patient biomarkers. Multi-Dimensional Subset Scanning (MDSS) identified a group of patients with shared biomarkers that exhibited poor response to TARE, and Cox Proportional Hazards (CoxPH) modeling revealed hazard ratios for biomarkers aligning with clinical expectations, albeit with a limited C-index. Augmenting CoxPH modeling with embeddings from a deep learning foundation model pre-trained on liver CT and MRI scans and fine-tuned to predict treatment response resulted in a substantially higher C-index. Interestingly, models fine-tuned to predict one clinical feature had improved predictive accuracy for other features they were not specifically trained on, and Class Activation Mapping (CAM) visualizations showed that salient embedding dimensions focus on the liver region, providing interpretability. The ensemble of computational techniques applied to multimodal clinical trial data successfully identified patient subgroups, extracted predictive biomarkers, and enhanced the accuracy of treatment response predictions, contributing to the development of more effective, personalized treatment strategies for mCRC patients undergoing TARE.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156772</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extended Evaluation: Unraveling Medicaid Patient Trajectories and Improving Intervention Candidate Identification</title>
<link>https://hdl.handle.net/1721.1/156771</link>
<description>Extended Evaluation: Unraveling Medicaid Patient Trajectories and Improving Intervention Candidate Identification
Joglekar, Natasha
We analyze the Camden Coalition’s Health Information Exchange (HIE) data to gain deeper insight into the trajectories of Medicaid patients through the health system. Recognizing the complex challenges of social determinants of health, this study seeks to find patterns and opportunities within the Medicaid population’s healthcare journeys. Through time series analysis, we examine the utilization trajectories of Medicaid patients over time. Combining this insight with predictive modeling, we then develop a methodology for identifying persistent high-cost healthcare utilization and consider how this information may change program implementation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156771</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Methods for Learning Genetic Dependencies</title>
<link>https://hdl.handle.net/1721.1/156769</link>
<description>Machine Learning Methods for Learning Genetic Dependencies
Cai, Cathy
Synthetic lethality refers to a genetic interaction where the simultaneous perturbation of gene pairs leads to cell death. Synthetically lethal gene pairs (SL pairs) provide a potential avenue for selectively targeting cancer cells based on genetic vulnerabilities. The rise of large-scale gene perturbation screens such as the Cancer Dependency Map (DepMap) offers the opportunity to identify SL pairs automatically using machine learning. We build on a recently developed class of feature learning kernel machines known as Recursive Feature Machines (RFMs) to develop a pipeline for identifying SL pairs based on CRISPR viability data from DepMap. In particular, we first train RFMs to predict viability scores for a given CRISPR gene knockout from cell line embeddings consisting of gene expression and mutation features. After training, RFMs use a statistical operator known as the average gradient outer product to provide weights indicating the importance of each feature in predicting cellular viability. We subsequently apply correlation-based filters to re-weight RFM feature importances and identify those features that are most indicative of low cellular viability. Our resulting pipeline is computationally efficient, taking under 3 minutes to analyze all 17,453 knockouts from DepMap for candidate SL pairs. We show that our pipeline more accurately recovers experimentally verified SL pairs than prior approaches. Moreover, our pipeline finds new candidate SL pairs, thereby opening novel avenues for identifying genetic vulnerabilities in cancer.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156769</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aquaculture Basket Detection and Tracking for Autonomous Surface Vehicles</title>
<link>https://hdl.handle.net/1721.1/156768</link>
<description>Aquaculture Basket Detection and Tracking for Autonomous Surface Vehicles
Gillespie, Fiona J.
With the global population on the rise, there is an increased demand for seafood, underscoring the crucial role of aquaculture, the practice of farming aquatic organisms [1]. Within aquaculture, oyster farming is relatively low maintenance, except for the challenge of manually flipping heavy oyster-laden bags. To address this issue, MIT Sea Grant introduced the Oystermaran, an autonomous catamaran specifically designed for this task. This thesis presents contributions to the electronics, controls, and perception systems of the Oystermaran project. In particular, it presents an oyster basket detection and tracking method using the object detector You Only Look Once (YOLO) [2]. In addition, the electronics system has been updated and new manual controllers have been created to enable the use of a new flipping mechanism developed this year. The system is evaluated on data from field testing at Ward Aquafarms, a Cape Cod-based oyster farming business. The results show that oyster baskets can be robustly detected in new environments despite varying environmental conditions, marking a significant step towards real-time viability for autonomous oyster farming.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156768</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Supervised Audio-Visual Speech Diarization and Recognition</title>
<link>https://hdl.handle.net/1721.1/156767</link>
<description>Self-Supervised Audio-Visual Speech Diarization and Recognition
Wongprommoon, Arun
Many real-world use cases of automatic speech recognition (ASR), such as TV broadcasts and video conferences, contain video and multiple speakers. However, state-of-the-art end-to-end multimodal ASR models generally do not support diarization. This thesis extends one such model, AV-HuBERT, to address the diarization problem while maintaining word recognition accuracy. The proposed Audio-Visual Cocktail (AVC) HuBERT model extends the video input dimensions, lengthens the feature size, and adds projection layers to split outputs into corresponding speakers. A complementary synthesized dataset is constructed by mixing audio and video samples from LRS3 at varying overlap thresholds, resulting in the LRS3Mix dataset. This dataset is used to train the model, whose weights are transferred from AV-HuBERT. Computing several word error rate (WER) metrics to measure the recognition and diarization performance of several versions of AVC-HuBERT demonstrates that the method improves diarization, albeit with a small tradeoff in word recognition. Augmenting the synthesized mixed dataset with the original clean single-speaker dataset boosts recognition ability, and the same effect is observed as the dataset size increases.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156767</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using heterogeneous Graph Neural Networks (hGNN) to predict cell-cell communication</title>
<link>https://hdl.handle.net/1721.1/156766</link>
<description>Using heterogeneous Graph Neural Networks (hGNN) to predict cell-cell communication
Yan, Binwei
This thesis investigates diverse computational methodologies for modeling cellular interactions using single-cell RNA sequencing (scRNA-seq) data. We evaluate the performance of Graph Neural Networks (GNNs), both with and without gene-gene edges, Contrastive Learning, and Variational Autoencoders (VAEs) across multiple datasets. Our study compares these methods and establishes benchmarks for assessing their effectiveness beyond traditional case studies. By integrating extensive signaling pathway data, we aim to unveil complex cell-cell communication patterns and regulatory mechanisms that conventional scRNA-seq analysis methods might overlook. Our approach emphasizes the use of spatial data as a crucial indicator, facilitated by the advanced capabilities of heterogeneous GNNs to model physical proximity. We found that our analysis of the functioning genes aligns with previous findings, demonstrating our model’s effectiveness as a potential method for further analyzing communication mechanisms.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156766</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling Privacy-Preserving Payments</title>
<link>https://hdl.handle.net/1721.1/156765</link>
<description>Scaling Privacy-Preserving Payments
Ali, Ayesha
We explore privacy-preserving payments in a centralized setting, such as CBDCs. Specifically, we focus on two classes of designs that hide the transaction graph: Chaumian e-cash and Merkle tree-based systems (e.g., Tornado Cash), which differ in both their security assumptions and their scalability. We highlight scalability limitations of Merkle tree-based privacy systems that would be encountered in a network as large as a CBDC, and propose a sharded Merkle tree design to improve scalability while maintaining strong privacy. However, as we analyze, conventional sharding methods pose privacy risks, prompting the introduction of a 'tree of sharded trees' design that preserves privacy at a modest increase in latency. We describe, implement, and evaluate all three designs, and find that unmodified Tornado Cash indeed suffers from resource-contention-induced scalability bottlenecks. In contrast, our new design achieves throughput that is less than an order of magnitude away from e-cash, despite providing auditability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156765</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>SAGE: Segmenting and Grouping Data Effectively using Large Language Models</title>
<link>https://hdl.handle.net/1721.1/156764</link>
<description>SAGE: Segmenting and Grouping Data Effectively using Large Language Models
Pedraza Pineros, Isabella
Grouping is a technique used to organize data into manageable pieces, reducing cognitive load and enabling users to focus on discovering higher-level insights and generating new questions. However, creating groups remains a challenge, often requiring users to have prior domain knowledge or an understanding of the underlying structure of the data. We introduce SAGE, a novel technique that leverages the knowledge base and pattern recognition abilities of large language models (LLMs) to segment and group data with domain awareness. We instantiate our technique through two structures: bins and highlights; bins are contiguous, non-overlapping ranges that segment a single field into groups; highlights are multi-field intersections of ranges that surface broader groups in the data. We integrate these structures into Olli, an open-source tool that converts data visualizations into accessible, keyboard-navigable textual formats to facilitate a study with 15 blind and low-vision (BLV) participants, recognizing them as experts in assessing agency. Through this study, we evaluate how SAGE impacts a user’s interpretation of data and visualizations, and find our technique provides a rich contextual framework for users to independently scaffold their initial sensemaking process.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156764</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Genetic Basis of Sex Differences in Human Height</title>
<link>https://hdl.handle.net/1721.1/156763</link>
<description>Understanding the Genetic Basis of Sex Differences in Human Height
Aluru, Amulya S.
Sex differences are prevalent across health, development and disease. Driven by the sex chromosomes, the largest source of genetic variation in the human population, trait differences between males and females can have important implications in treatment response and disease diagnosis. Genes along the X and Y chromosomes encode broadly-expressed regulators of the transcriptome and epigenome that have diverged in function and expression. These sex chromosome-linked gene pairs enforce differences in regulatory landscapes and autosomal gene expression patterns between biological males (XY) and females (XX), which can have far-reaching consequences. Despite this, the field of population genetics has rarely considered the special role of sex-linked loci and sex-biased genetic effectors in establishing sex-dependent trait variation.  In this thesis, I integrate existing tools in statistical genetics for the repurposed goal of understanding the genetic basis of sex differences in complex traits. Through combining genome-wide association study (GWAS) data with gene expression panels and sex-biased gene expression information, previous work in the lab has demonstrated that genes with conserved sex bias contribute to the establishment of sex bias in height. First, to understand the relationship between GWAS power and sex differences, we compared the performance of two differently powered GWAS in their ability to explain sex bias in height, finding a modest increase in genetic insight by the larger GWAS. Second, we assessed functional elements across the genome that may differentially contribute to height between males and females to propose alternative mechanisms alongside gene expression that may establish sex differences in height. 
Altogether, the work presented in this thesis demonstrates the potential of sex differences research to utilize well-powered studies of sex-biased regulators and variant-trait associations to better understand the genetic mechanisms, including but not limited to gene expression, that cultivate and maintain sex differences in complex traits.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156763</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Monte Carlo Tree Search Applications to Neural Theorem Proving</title>
<link>https://hdl.handle.net/1721.1/156761</link>
<description>Monte Carlo Tree Search Applications to Neural Theorem Proving
LaBelle, Ethan
A common problem of LLM inference is hallucination, where models generate false information. Another is the tradeoff between model size and computational cost: larger models use more VRAM and require longer training and inference times. This work explores solutions to these problems, namely search and verification, following Yang et al.'s recent contribution, LeanDojo: Theorem Proving with Retrieval-Augmented Language Models. In that work, Yang et al. introduce LeanDojo, an environment for programmatic interaction with the Lean theorem proving language, alongside ReProver, a ByT5-Small transformer-based ATP fine-tuned on the open-source Lean mathlib. The smaller model requires fewer resources, enabling faster inference, which, when combined with search, improves the effective performance of the model. We use the language model to generate a space of partial proof trees in Lean. As the core model can be interchanged with a larger or more performant one, this work focuses on search algorithms for finding novel proofs given the same computational budget. Three classes of algorithms are explored: best-first search, random walk, and Monte Carlo Tree Search. Search algorithms are evaluated on the random-split test dataset of the LeanDojo Benchmark. Finally, we present common failure modes of various methods, search results of algorithm variants, and novel proofs discovered relative to the baseline. Across our trials, we show the search space defined by ReProver's tactic generator contains proofs for approximately 55.0% of theorems in the LeanDojo Benchmark random test split. In Yang et al.'s evaluations, ReProver achieves a 51.2% Pass@1 solve rate on this benchmark.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156761</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Changes in Individual Wellbeing Scores: Mixed Effects Models using Sleep Data from Wearables</title>
<link>https://hdl.handle.net/1721.1/156760</link>
<description>Predicting Changes in Individual Wellbeing Scores: Mixed Effects Models using Sleep Data from Wearables
Choi, Shelley
Sleep plays a major role in regulating human cognitive function, performance, mood, and wellbeing. Despite its significance, the intricate relationship between various sleep components, such as duration, quality, and regularity, and wellbeing outcomes remains inadequately explored. The nature of sleep data poses challenges in capturing and interpreting temporal patterns, but the growing popularity of wearable devices capable of collecting vast multi-modal data presents a promising avenue to bridge this gap. The aim of this thesis is two-fold: first, to identify the impact of different combinations and transformations of sleep regularity (Sleep Regularity Index, SRI; Composite Phase Deviation, CPD; Interdaily Stability, IS) and duration calculated from wearable devices across varying time frames on self-reported morning wellbeing scores (alertness, happiness, energy, health, calmness); and second, to evaluate both linear and nonlinear associations between different sleep metrics and wellbeing. To address the high user variability arising from the personalized nature of sleep and the subjective nature of wellbeing assessments, we employ mixed effects modeling techniques in which each individual is treated as their own cluster, including Linear Mixed Effects models (LMM) and Mixed Effects Random Forest (MERF), where the latter is benchmarked against classic machine learning models. The LMM results were most statistically significant for independent regularity (SRI, IS), combined regularity (SRI and IS), total sleep time as duration (TST), and combined regularity and total sleep time (SRI and TST, IS and TST) for alertness and energy over 2-4 nights. MERF outperformed other models in Mean Absolute Error (MAE) for all time-split scenarios. This research further emphasizes the importance of addressing data leakage due to the time sensitivity of sleep data and the calculation of regularity spanning multiple days.
By establishing correlations between sleep parameters and wellbeing indicators, this study hopes to provide deeper insights into fluctuations in wellbeing and inform the development of wearables that monitor sleep patterns.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156760</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast spectroscopy and control of correlated quantum materials</title>
<link>https://hdl.handle.net/1721.1/156759</link>
<description>Ultrafast spectroscopy and control of correlated quantum materials
Fichera, Bryan T.
In this thesis, I describe research completed during my Ph.D. on correlated condensed matter systems using ultrafast optics. I begin with a broad overview of this field, focusing specifically on the essential physics involved in ultrafast processes and how that physics may be utilized, in the sense of either spectroscopy or control, to understand correlated systems. I then give a pedagogical introduction to second harmonic generation, both in theory and in practice, before describing results from four projects I completed in my Ph.D.—(i) a technical project concerned with automating polarization rotation in second harmonic generation, (ii) a demonstration that second harmonic generation may be used to differentiate charge density wave domains with opposite planar chirality, (iii) our discovery of an ultrafast reorientation transition in the antiferromagnetic semiconductor CaMn₂Bi₂, and (iv) second harmonic generation evidence for an amplitude-mode electromagnon in CuBr₂. I conclude by reflecting on the progress achieved in correlated electron physics as a result of this work, and by giving my own perspective on the future of this field.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156759</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gyroscopes Orbiting Gargantuan Black Holes: Spinning Secondaries in Extreme Mass Ratio Inspirals</title>
<link>https://hdl.handle.net/1721.1/156758</link>
<description>Gyroscopes Orbiting Gargantuan Black Holes: Spinning Secondaries in Extreme Mass Ratio Inspirals
Drummond, Lisa V.
Large mass ratio binary black hole systems are essential for studying the two-body problem in general relativity and are key sources of low-frequency gravitational waves. These sources will be detectable by the Laser Interferometer Space Antenna (LISA), a planned space-based gravitational-wave observatory. At lowest order, the secondary body (smaller black hole) follows a geodesic of the more massive black hole's spacetime. Post-geodesic effects are needed to model the system accurately. Failure to incorporate these effects can introduce bias in tests of general relativity and compromise precision measurement of the larger black hole's properties. One very important post-geodesic effect is the gravitational self-force, which describes the small body's interaction with its own contribution to a binary's spacetime and includes the backreaction of gravitational-wave emission driving inspiral. Another post-geodesic effect, the spin-curvature force, is due to the smaller body's spin coupling to spacetime curvature. Exploiting the large mass-ratio approximation, this thesis presents a suite of mathematical and computational tools for precisely calculating bound orbits and inspiral of spinning bodies around rotating black holes.

In Chapters 3 and 4, we employ a frequency-domain formulation to describe completely general orbits of spinning bodies in curved spacetime. The small body's spin influences orbital frequencies and accumulated phases, which are direct gravitational-wave observables. In Chapter 5, we combine the leading orbit-averaged backreaction of point-particle gravitational-wave emission with the spin-curvature force to construct the trajectory and associated gravitational waveform of a spinning body inspiraling into a Kerr black hole. To achieve this, we use a near-identity transformation (NIT) to rapidly compute trajectories for generic orbit and spin configurations. This efficiency is essential for the high-dimensional, long-duration waveforms of large mass-ratio binary systems. In Chapter 6, we describe how the framework of Chapters 3 and 4 can be used to generate gravitational wave fluxes for spinning bodies on completely generic orbits, and discuss a 'shifted geodesic' approximation scheme that could speed up the evaluation of these fluxes. This thesis introduces methods for accurately modeling completely general orbits of spinning bodies in large mass ratio binary black hole systems, enhancing gravitational-wave models for the LISA science program and providing a precisely computable limit as a benchmark for calculations across all mass ratios.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156758</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Code Summarization and Program Synthesis with Large Language Models</title>
<link>https://hdl.handle.net/1721.1/156757</link>
<description>Code Summarization and Program Synthesis with Large Language Models
Lam, Kelly
Automatic source code summarization and generation are naturally complementary operations: they bridge the gap between natural-language text and executable programs, allowing users to move between the two modes. Even though large language models have become increasingly popular, it is unclear how effective they are at code summarization and generation, especially for longer source code segments or more complicated generation prompts. In this thesis, we formalize the automatic code summarization and generation problems, identify cases where large language models can perform poorly, propose techniques to correct poor initial results, and evaluate our results against appropriate baselines using suitable evaluation metrics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156757</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating Spatial Transcriptomics Data for Cross-Species Molecular Region Comparison</title>
<link>https://hdl.handle.net/1721.1/156756</link>
<description>Integrating Spatial Transcriptomics Data for Cross-Species Molecular Region Comparison
Li, Bridget
Comparative analysis of brain patterns across species can advance understanding of different biological processes and functions. Spatially resolved transcriptomics (SRT) technologies present the ability to measure gene expression of single cells within tissues, enabling the detection of unique spatial molecular patterns in the brain. Several computational methods that rely on cellular neighborhood information have been developed for characterizing molecular tissue regions in SRT data. Here, we show that spatial integration (SPIN) improves the performance of existing methods and enables the clustering of molecular tissue regions. Then, we test SPIN and signal-processing approaches on SRT data from mouse and macaque brains. We integrate the brain atlases of these two species to identify shared and distinct spatial molecular patterns. This work offers new insights into spatial molecular features between mouse and macaque brains and proposes a framework for integrating SRT datasets on a large scale.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156756</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Algorithmic Progress in Data Structures and Approximation Algorithms</title>
<link>https://hdl.handle.net/1721.1/156755</link>
<description>On Algorithmic Progress in Data Structures and Approximation Algorithms
Li, Jeffery
In the big data regime, computer systems and algorithms must process large amounts of data, making many traditional exact algorithms too costly to run. To work around this, researchers have developed approximation algorithms, which trade some accuracy for asymptotic improvements in runtime, and data structures, which can efficiently store and answer multiple queries about a dataset. This naturally leads to the question: how have approximation algorithms and data structures improved over the years? Here, we provide some insight into this question, examining trends in algorithmic and data structure progress, tradeoffs between speed and accuracy or between the runtimes of specific data structure operations, and specific problems of interest. Our analysis is based on a dataset of around 300 approximation algorithms and around 250 data structures. For both fields, we find that research remains fairly active to the present day, even though significant or asymptotic gains for data structures have been slowly declining. Improvements have also been fairly heterogeneous: some problems have seen substantial work and improvement, while others have seen much less progress. In addition, among problems with both exact and approximation algorithms, for around 1/6 of them approximation algorithms have had immensely larger average yearly improvement rates than exact algorithms, while for around 1/2 approximation algorithms have shown minimal improvement over exact algorithms. For data structures, we find that only 4 of the 28 abstract data types in our dataset have ever exhibited a tradeoff between storage requirements and/or the runtimes of specific operations, with only 2 such tradeoffs persisting today, suggesting that improvements generally build on each other without increasing space usage or the time required for other operations.
This research helps us understand how approximation algorithms and data structures have progressed over the years and where they stand today.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156755</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving LLM Long Context Understanding via Synthetic Data and Adaptive Compression</title>
<link>https://hdl.handle.net/1721.1/156754</link>
<description>Improving LLM Long Context Understanding via Synthetic Data and Adaptive Compression
Li, Jerry
Recent innovations in large language models (LLMs) have led to their widespread use, but the long context problem remains a fundamental challenge. Transformer-based LLMs are constrained by the quadratic scaling of the self-attention mechanism, which restricts most popular LLMs to a context length of several thousand tokens. Many methods have been introduced to extend the context of LLMs, including the Activation Beacon approach. In this work, we propose two key advancements to the existing methodology. First, we generate long context synthetic data across a variety of tasks for training context-extended models, which can supplement or even replace expensive human-annotated data. Second, we introduce a novel two-pass, adaptive compression technique for more intelligent compression of long contexts. We find that the two strategies lead to orthogonal performance improvements on real-world long context tasks, resulting in an overall 4.2% increase in accuracy compared to the previous benchmark.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156754</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Irreversible Actions in Assistance Games with a Dynamic Goal</title>
<link>https://hdl.handle.net/1721.1/156753</link>
<description>Irreversible Actions in Assistance Games with a Dynamic Goal
Mayer, Hendrik T.
Reinforcement Learning (RL) agents optimize reward functions to learn desirable policies in a variety of important real-world applications such as self-driving cars and recommender systems. However, in practice, it can be very difficult to specify the correct reward function for a complex problem, a challenge known as reward misspecification. Impact measures provide metrics to determine how robust a particular agent’s behavior is to reward misspecification. This thesis analyzes one particular impact measure: the frequency of irreversible actions that an agent takes. We study this impact measure using a time-varying model of the principal’s preferences. This choice was motivated by two primary considerations. First, many real-world scenarios consist of a principal with time-varying preferences. Second, an agent assuming time-varying preferences may be more averse to performing irreversible actions. In this thesis, we examine principal-agent (human-robot) assistance games in toy grid environments inspired by cooperative inverse reinforcement learning [1], where irreversible actions correspond to removing transitions from a POMDP. In these games, we focus on how the frequency of changes in the principal’s preferences and the optimality of the principal influence the agent’s willingness to take irreversible actions. In 2-node and 4-node assistance games, we find two main results. First, in the presence of a random or approximately optimal human, the robot performs more irreversible actions as the goal state changes position more often. Second, in the presence of an optimal human, the robot rarely performs irreversible actions.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156753</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>ExoBLAS: Meta-Programming a High-Performance BLAS via Scheduling Automations</title>
<link>https://hdl.handle.net/1721.1/156752</link>
<description>ExoBLAS: Meta-Programming a High-Performance BLAS via Scheduling Automations
Droubi, Samir
Kernel libraries are designed to support numerical computations and provide efficient implementations of them. The goal of these libraries is to provide many optimized functionalities, which is a challenge when the implementations of those programs are often written in C or assembly. BLAS (Basic Linear Algebra Subprograms) is a prominent example of such a library, where the dimensionality of the interface imposes a huge space of functions to implement, which makes it particularly challenging to support. Our work tackles the problem of implementing BLAS in the context of meta-programming, particularly user-scheduling in the Exo programming language. We base our solution on three key ideas to achieve reuse at the level of the meta-program. First, there are similarities in the individual optimizations that are performed on these kernels, which we capture as scheduling operations with which we extend the Exo programming language. Second, the end-to-end optimization strategies (or schedules) for groups of these kernels are the same, and we capture them as scheduling automations. Lastly, more complex BLAS operations from higher levels can be transformed into less complex BLAS-like operations similar to operations from lower levels, so we can use the automation of a lower level to build the automation of a higher level. We evaluate our results against industry and open-source implementations of BLAS and show that we achieve competitive performance with a small implementation in terms of lines of code.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156752</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond Memorization: Exploring the Dynamics of Grokking in Sparse Neural Networks</title>
<link>https://hdl.handle.net/1721.1/156751</link>
<description>Beyond Memorization: Exploring the Dynamics of Grokking in Sparse Neural Networks
Fuangkawinsombut, Siwakorn
In the domain of machine learning, "grokking" is a phenomenon where neural network models demonstrate a sudden improvement in generalization, distinct from traditional learning phases, long after the initial training appears complete. This behavior was first identified by Power et al. (2022) [5]. This thesis explores grokking within the context of the (&#119899;, &#119896;)-parity problem, aiming to uncover the mechanisms that trigger such transitions. Through extensive empirical research, we examine how different neural network configurations and training conditions influence the onset of grokking. Our methodology integrates advanced visualization techniques, such as t-SNE, and kernel density estimations to track the evolution from memorization to generalization phases. Furthermore, we investigate the roles of weight decay and network robustness against outliers, focusing on optimizing neural network architectures to achieve effective generalization with fewer computational resources. This study advances our understanding of grokking and proposes practical strategies for designing more efficient neural networks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156751</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automating Accountability Mechanisms in the Judiciary System using Large Language Models</title>
<link>https://hdl.handle.net/1721.1/156750</link>
<description>Automating Accountability Mechanisms in the Judiciary System using Large Language Models
Shastri, Ishana
Holding the judicial system accountable often demands extensive effort from auditors who must meticulously sift through numerous disorganized legal case files to detect patterns of bias and systemic errors. For example, the high-profile investigation into the Curtis Flowers case took nine reporters a full year to assemble evidence about the prosecutor’s history of selecting racially-biased juries. Large Language Models (LLMs) have the potential to automate and scale these accountability pipelines, especially given their demonstrated capabilities in both structured and unstructured document retrieval tasks. We present the first work elaborating on the opportunities and challenges of using LLMs to provide accountability in two legal domains: bias in jury selection for criminal trials and housing eviction cases. We find that while LLMs are well-suited for information extraction from eviction forms that have more structure, court transcripts present a unique challenge due to disfluencies in transcribed speech.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156750</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Topology for Capacitively Isolated Switched Capacitor Converter</title>
<link>https://hdl.handle.net/1721.1/156749</link>
<description>Novel Topology for Capacitively Isolated Switched Capacitor Converter
Jerez, Raiphy
This thesis introduces a novel topology for capacitive isolation in switched-capacitor DC-DC converters, taking inspiration from previous work [1]. The research endeavors to develop a unique switched-capacitor topology that enables isolation between input and output voltages. By integrating elements of the Cockcroft-Walton generator into the Dickson converter framework, the proposed design seeks to leverage the inherent advantages of switched-capacitor converters—such as compactness, lightweight design, and higher efficiency at low to moderate power levels—over traditional magnetic converters. Additionally, the incorporation of isolation in the switched-capacitor converter architecture offers enhanced flexibility, allowing for selective power processing and more precise regulation. This feature is particularly beneficial in applications requiring dynamic power management and improved efficiency in power conversion.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156749</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic Interpretability for Progress Towards Quantitative AI Safety</title>
<link>https://hdl.handle.net/1721.1/156748</link>
<description>Mechanistic Interpretability for Progress Towards Quantitative AI Safety
Lad, Vedang K.
In this thesis, we conduct a detailed investigation into the dynamics of neural networks, focusing on two key areas: inference stages in large language models (LLMs) and novel program synthesis methods using mechanistic interpretability. We explore the robustness of LLMs through layer-level interventions such as zero-ablations and layer swapping, revealing that these models maintain high accuracy despite perturbations. Based on these results, we hypothesize distinct stages of inference in LLMs. This work suggests implications for LLM dataset curation, model optimization, and quantization. Subsequently, we introduce MIPS, an innovative method for program synthesis that distills the operational logic of neural networks into executable Python code. By transforming an RNN into a finite state machine and applying symbolic regression, MIPS successfully addresses 32 out of 62 algorithmic tasks, outperforming GPT-4 in 13 unique challenges. This work takes a step toward enhancing the interpretability and reliability of AI systems, promising significant advances in our understanding and utilization of current and future AI capabilities. Together, these studies highlight the importance of comprehending the inferential behaviors of neural networks to foster more interpretable and efficient AI.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156748</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Steerable Alignment with Conditional Multiobjective Preference Optimization</title>
<link>https://hdl.handle.net/1721.1/156747</link>
<description>Steerable Alignment with Conditional Multiobjective Preference Optimization
Manyika, Julian
As the scale, capabilities and use-cases of large language models (LLMs) continue to grow, it is imperative that these systems are aligned with human preferences. Current state-of-the-art alignment strategies such as Reinforcement Learning from Human Feedback (RLHF) have provided useful paradigms for finetuning LLMs to produce outputs that are more consistent with human preferences. These approaches, however, assume that preferences are formed by a single, underlying reward model, which is likely insufficient for representing an individual’s preferences, certainly unable to represent diverse group preferences, and inflexible for users at inference time. To address these limitations, we propose Conditional Multiobjective Preference Optimization (CMPO), a novel alignment strategy that trains a user-steerable LLM along multiple attributes of text, such as helpfulness and humor. CMPO simulates the Pareto front of multiple single-attribute preference-optimized models through structural plurality and finetuning with Direct Preference Optimization (DPO), and allows users to condition outputs on the predefined attributes at inference time. Experiments show that CMPO generates responses that are preferred to those from separate attribute-specific DPO models and from models trained using SteerLM, an alternative model-steering approach. CMPO empirically shows promise as a scalable and flexible finetuning strategy for creating LLMs that are attribute-steerable from parameterized preferences.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156747</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verifying Hardware Security Modules With True Random Number Generators</title>
<link>https://hdl.handle.net/1721.1/156746</link>
<description>Verifying Hardware Security Modules With True Random Number Generators
Zhao, Katherine
Hardware security modules (HSMs) are powerful tools in building secure computer systems, allowing developers to factor out security-critical code to separate devices. Because HSMs usually work with sensitive data, it is crucial that we are able to verify that they are secure. Many HSMs today also include true random number generators (TRNGs) as part of their architecture to seed cryptographic functions for generating keys, creating nonces, padding, and more. This thesis presents a definition of Information-Preserving Refinement with Randomness (IPRR) that captures the idea that an HSM with a TRNG is correct and is secure from timing side-channel attacks. We additionally construct a strategy to prove IPRR, and develop Karatroc, a tool for verifying that an HSM satisfies IPRR. Through the creation and evaluation of Karatroc, we demonstrate the ability to verify HSMs with TRNGs without incurring significant added cost in performance and proof length as compared to existing proof methods.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156746</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Standardization of Electronic Component Datasheets to Improve Systematic Data Extraction</title>
<link>https://hdl.handle.net/1721.1/156745</link>
<description>Standardization of Electronic Component Datasheets to Improve Systematic Data Extraction
Gustafson, Nicholas F.
This thesis addresses the challenge of standardizing electronic component datasheets to improve systematic data extraction. The absence of uniformity in datasheet design complicates the process of systematically extracting critical information, leading to significant manual effort and potential errors. This research explores the current state of datasheet standardization and examines existing systematic data extraction efforts from semi-structured documents. It highlights the limitations of current methods and emphasizes the need for further standardization to facilitate accurate and efficient data extraction. The thesis proposes a detailed methodology for transitioning electronic component datasheets from semi-structured to structured formats through standardization. By defining common standards and specific structures for different types of datasheets, this approach aims to enhance both human readability and machine processing. The thesis concludes by discussing the broader implications of these standards and their potential applications in other fields. Through this work, the goal is to streamline the datasheet creation process, reduce manual intervention, and ultimately improve the accuracy and efficiency of systematic data extraction in the electronic components industry.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156745</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Hybrid Switched-Capacitor Converter for Capacitive Wireless Power Transfer in Biomedical Applications</title>
<link>https://hdl.handle.net/1721.1/156744</link>
<description>A Hybrid Switched-Capacitor Converter for Capacitive Wireless Power Transfer in Biomedical Applications
Sund, Jade
Rechargeable pulse generators on the market use inductive wireless power transfer (I-WPT), but capacitive wireless power transfer (C-WPT) has the potential to provide safety and size improvements over I-WPT. Current C-WPT research is focused on resonant capacitive coupling methods, and such works have reported power transfer efficiencies of less than 40%. In this thesis, a capacitively isolated Dickson converter, a type of hybrid switched-capacitor converter, is investigated to determine whether it can deliver power to biomedical implants safely, efficiently, and in a small package.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156744</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing 3D Scene Graph Generation with Multimodal Embeddings</title>
<link>https://hdl.handle.net/1721.1/156743</link>
<description>Enhancing 3D Scene Graph Generation with Multimodal Embeddings
Morales, Joseph
3D Scene Graphs are expressive map representations for scene understanding in robotics and computer vision. Current approaches for automated zero-shot 3D Scene Graph generation rely on spatial ontologies that relate objects with the semantic locations they are found in (e.g., a fork is found in a kitchen). While conferring impressive zero-shot performance, these approaches are conditioned on the existence of disambiguating objects in a scene, the expressiveness of the generated spatial ontologies, and knowing during data collection that a robot needs to observe specific objects in the environment. This thesis proposes a method for zero-shot scene graph generation by leveraging Vision-Language Models (VLMs) to construct a layer of Viewpoints in the scene graph, which allow for after-the-fact open-vocabulary querying over the scene. Methods for utilizing different VLM features are explored, which result in improvement over the ontological approach on region segmentation tasks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156743</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inductive Biases in Learning Hierarchical Abstractions for Bipedal Locomotion</title>
<link>https://hdl.handle.net/1721.1/156742</link>
<description>Inductive Biases in Learning Hierarchical Abstractions for Bipedal Locomotion
Ravichandar, Sanjna
Bipedal locomotion presents a complex challenge in the field of reinforcement learning (RL), due to the high-dimensional state and action space. Hierarchical abstractions and inductive biases emerge as critical components in navigating this complexity, offering pathways for effective learning and adaptation in bipedal locomotion tasks. By leveraging hierarchical structures and inductive biases, RL controllers can distill the inherent complexity of bipedal locomotion into manageable components, facilitating more efficient learning and adaptation processes. This work explores hierarchical abstractions within the context of RL for bipedal locomotion. We investigate three distinct RL locomotion controllers: a baseline controller, an action space abstraction controller, and a novel Hierarchical RL (HRL) controller, each implemented on velocity tracking tasks. We assess the controllers across various RL metrics, including task performance, learning efficiency, stability, and human-likeness metrics derived from human locomotion studies. We quantify the effectiveness of hierarchical abstractions and inductive biases in enhancing locomotion task performance and aligning RL-generated behaviors with human locomotion patterns. The action space abstraction controller emerges with superior performance, and our investigation underscores the potential of HRL approaches, indicating their ability to leverage hierarchical structures for optimized locomotion behaviors and highlighting the importance of selecting appropriate and well-designed abstractions. By analyzing the role of hierarchical abstractions and inductive biases in bipedal RL, our study contributes to advancing the understanding and development of RL algorithms for bipedal locomotion, with implications for the design of more efficient and human-like locomotion behaviors in robotic systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156742</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Twofish: Automatic Edit Cascading for Diagrams</title>
<link>https://hdl.handle.net/1721.1/156741</link>
<description>Twofish: Automatic Edit Cascading for Diagrams
Huang, Grace
Creating and editing diagrams, whether for scientific research, education, or otherwise, is tedious and time-consuming. When a user makes a small change to a diagram element, they often have to make additional downstream edits to fully propagate the change to the diagram. This is because relative positioning constraints are often defined through layout commands, such as alignment, which are viewed by many direct manipulation editors as one-time operations. That is, a layout command enforces spatial relationships between objects by mutating them but does not enforce these relationships when the user makes later edits. While viewing these commands as one-time operations improves the editing flexibility of the editor, it makes editing less efficient. To balance the tradeoff between editing flexibility and efficiency, we present Twofish, a graphical editor that persists relations between elements. In this context, relations, such as alignment or an arrow, associate elements with each other by defining relative spacing constraints between them. Through persisting these relations, we can reapply them automatically to the diagram when corresponding elements are edited. This allows Twofish to automatically cascade edits downstream to fix any positioning constraints that were broken because of a change. This system is built as an extension of an existing graphical editor. In doing so, Twofish makes it easier to create and edit diagrams without sacrificing expressibility. To evaluate Twofish, we compared using Twofish and Figma to edit diagrams in six different scenarios, using three example diagrams. From this comparison, we found that Twofish generally improved editing efficiency but had worse editing flexibility than Figma.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156741</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying Grit in MLB Batters</title>
<link>https://hdl.handle.net/1721.1/156740</link>
<description>Quantifying Grit in MLB Batters
Yang, Angel
This thesis investigates the quantification of grit in Major League Baseball (MLB) batters, a crucial yet underexplored area in sports analytics traditionally gauged through qualitative assessment. Utilizing 2023 game data from the top 160 most utilized MLB batters, this study develops a Grit Score for each player based on the number of at-bats required to return to average performance after a period of below-average performance. At-bat performance is measured through Delta Runs Expected, and the at-bat group size of the window is selected by testing for correlation and consistency in player grit rankings. Results reveal significant variations in Grit Scores among batters; players identified as the most gritty generally correspond to those with top offensive performance, though grit and performance do not perfectly correlate. Furthermore, gritty batters tend to experience a higher number of hitting slumps but with shorter average lengths, regardless of the at-bat group size used to define the performance window. This research has implications in player valuation and development, team management, and scouting and drafting, suggesting that MLB teams should favor players who recover quickly from poor at-bats due to their more consistent performance and reliable offensive contributions to team success.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156740</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Informing decision-making in single-objective, mixed-variable design problems</title>
<link>https://hdl.handle.net/1721.1/156739</link>
<description>Informing decision-making in single-objective, mixed-variable design problems
Fang, Demi L.
Data-driven decision-making in mixed-variable design problems presents a variety of challenges and opportunities, especially in the increasingly data-rich field of emissions in architectural and structural design. Designers can benefit from an underlying knowledge about, for example, whether material choice (discrete) or span (continuous) has more important consequences for structural emissions. This intuition need not be built purely through experience or optimization: data-driven approaches can offer quantitative feedback. However, traditional approaches to sensitivity analysis are limited to continuous variables, while certain types of machine learning models can handle combinations of continuous and discrete variables. In this thesis, a hybrid gradient-based, sampling-based technique for determining the directional importance of mixed variables in a design space is benchmarked against state-of-the-art variable importance methods (also known as feature importance or interpretability methods) from machine learning. The importance evaluations and runtimes are compared across workflows. First, a concise literature review is presented, clarifying and unifying terminology across fields. Tree-based models are identified as a machine learning model that readily handles mixed-variable design spaces, and the following variable importance metrics are identified: impurity-based importance metrics (also known as Mean Decrease Impurity), permutation feature importance (PFI, also known as Mean Decrease Accuracy), and Shapley values. These existing workflows are applied to varying sample sizes of three different datasets related to low-carbon structural design.
The same samples are evaluated using the hybrid technique previously proposed by the author, which trains the data on a conditional variational autoencoder (cVAE), approximates gradients on the model, and summarizes gradients into “influence metrics” using a Gaussian mixture model (GMM) (in contrast to a mean absolute value). Through this comparison, this thesis establishes several findings, including advantages to using the hybrid cVAE and GMM-to-influence workflow over typical tree-based feature importance approaches. First, the hybrid method’s evaluation of gradients is consistently faster than the evaluation of importance in all other workflows for all sample sizes and datasets. Second, it avoids the known drawback of tree-based models’ tendency to assign higher importance to high-cardinality variables. Third, its definition of performance “gradients” with respect to each category (as opposed to each categorical variable) offers more specific, useful insights. For example, it is more useful to know which structural framing system is associated with large reductions in emissions (gradients by category) than to know that the choice of structural framing system is associated with a range of reductions and increases in emissions (gradients by categorical variable, which is typical in feature importance methods). These advantages come at the expense of more time (in this case, 10-fold) needed to train the model compared to state-of-the-art gradient-boosted tree models and the additional time needed to fit a GMM (as opposed to taking the mean absolute value of importance values across the sample). The hybrid workflow is still 2 to 10 times faster than the random forest workflows. Finally, these comparisons highlight the importance of cardinality of categorical variables in mixed-variable design spaces, both in the process of selecting a model and selecting an importance evaluation method.
Key words: variable importance, feature importance, mixed-variable design spaces, gradients, design space exploration, data-driven decision-making
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156739</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Hybrid Approach for Key-Value Extraction from Technical Specification Documents</title>
<link>https://hdl.handle.net/1721.1/156738</link>
<description>A Hybrid Approach for Key-Value Extraction from Technical Specification Documents
Lee, Samuel S.
As the number of documents processed by businesses across the world increases daily, the demand for streamlined and automated document processing methods grows. However, commercial methods for information extraction from documents do not generalize well across different document formats, as each solution is tailored to specific types of documents. This thesis provides an overview of a hybrid document processing pipeline designed to extract key-value pairs from technical specification documents with high accuracy. Two different phases of the pipeline are introduced, both employing rule-based methods and machine learning to cover a variety of document types. The first is an earlier iteration that extracts information from a simpler collection of documents, and the second is the current iteration designed to handle a much larger dataset containing more complex documents. Lastly, the initial stages of a module designed for key-value extraction from a specific type of technical specification document are also proposed.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156738</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transport Properties of Divertor Edge Plasmas Measured with Multi-Spectral Imaging</title>
<link>https://hdl.handle.net/1721.1/156659</link>
<description>Transport Properties of Divertor Edge Plasmas Measured with Multi-Spectral Imaging
Linehan, Bryan Lee
The transport of heat and particles in the boundary of a tokamak is not sufficiently understood for the purposes of constructing a pilot nuclear reactor. Improving numerical and theoretical understanding is inhibited by traditional boundary diagnostics that provide sparse and inflexible spatial coverage. In this thesis, multi-spectral imaging of helium line ratios (HeMSI) was used to create 2D poloidal maps of Tₑ and nₑ in the TCV divertor. These are the first plasma boundary measurements to provide continuous 2D coverage of Tₑ and nₑ for arbitrary magnetic geometries. These measurements were validated against co-local Thomson scattering measurements in diverted plasmas. HeMSI showed good agreement with Thomson scattering in the common flux region (CFR) of ionizing plasma for both majority helium and majority deuterium plasmas. Having validated this powerful new tool, HeMSI was used to investigate the effects of flux expansion in the TCV divertor for plasmas in the conduction-limited regime. Increasing poloidal flux expansion is expected to lower the temperature of the divertor target by increasing the plasma volume and connection length of the magnetic field line between the core and target. These benefits are observed in the conduction-limited regime but not in the partially detached regime. The 2D poloidal maps of Tₑ and nₑ, in concert with other measurements, were used to calculate the ionization rate of He and D, the E × B drift velocity, Spitzer heat conduction, and parallel flow in 2D. This allowed heat transport to be locally resolved into conduction, parallel convection, and drift convection components. Similarly, particle transport was categorized into drift and parallel components. These calculations demonstrate that in relatively cool plasmas (Tₑ &lt; 30 eV), drifts constitute a significant fraction of the heat and particle transport.
This violates the assumptions of simple two-point modeling and demonstrates the importance of accounting for drifts in modeling. Drifts may explain the boundary’s lack of sensitivity to poloidal flux expansion in the partially detached regime. Lastly, the anomalous heat and particle transport coefficients, χ⊥ and D⊥, were calculated by enforcing local power and particle balance. Values of χ⊥ close to the separatrix (ρ &lt; 1.005) and values of D⊥ were consistent with standard modeling practices. However, χ⊥ measurements sufficiently far into the CFR (ρ &gt; 1.005) exceeded typical modeling assumptions by two orders of magnitude. This implies that boundary codes will underestimate the radial temperature falloff length. This is shown to be true in a comparison of Tₑ measurements to simulations performed with the SOLPS-ITER code. This brings into question the validity of the assumption of diffusive heat transport in the far CFR.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156659</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Entanglement and Chaos in Quantum Field Theory and Gravity</title>
<link>https://hdl.handle.net/1721.1/156658</link>
<description>Entanglement and Chaos in Quantum Field Theory and Gravity
Wei, Annie Y.
In this thesis we explore several questions at the intersection of quantum information theory and quantum many-body physics. We study properties like entanglement and chaos, and we use intuition from discrete, few-body systems to learn about continuum systems. First we study quantum scars, a phenomenon previously studied in chaotic, few-body quantum systems, and we extend the analysis from the case of few-body quantum mechanics to the case of quantum field theory. Next we turn to the study of multipartite entanglement. Inspired by the operational interpretation of bipartite entanglement, we propose a new information-theoretic measure for tripartite entanglement based on subsystem recoverability, and we study this quantity in the vacuum state of (1+1)-D conformal field theory. Then we consider toy models of quantum gravity, where the objective is to construct qubit models that reproduce aspects of holography. We study toy models that consist of putting a lattice gauge theory on a tensor network, and we show how such toy models can be made background-independent. Finally we propose a new tensor network toy model for 3D gravity that features a topologically defined area operator, such that the areas on crossing cuts do not commute.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156658</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Illuminating the Cosmos: dark matter, primordial black holes, and cosmic dawn</title>
<link>https://hdl.handle.net/1721.1/156657</link>
<description>Illuminating the Cosmos: dark matter, primordial black holes, and cosmic dawn
Qin, Wenzer
The Λ-CDM model of cosmology has done much to clarify our picture of the early universe. However, some questions remain that Λ-CDM does not necessarily answer: What is the fundamental nature of dark matter? What is its origin? And what causes the intriguing measurements that we are seeing from cosmic dawn? In this thesis, I describe three directions in which I have pushed forward our understanding of how fundamental physics manifests in cosmology. First, I have studied the signatures of exotic energy injection in various astrophysical and cosmological probes, including the Lyman-α forest, the blackbody spectrum of the cosmic microwave background, the power spectrum of the cosmic microwave background, and the formation of the earliest stars in our universe. Second, I have investigated the formation of primordial black hole dark matter in a general model of inflation with multiple scalar fields. Using a Markov Chain Monte Carlo analysis, I have identified the space of models that can generate primordial black holes while remaining in compliance with observational constraints, and I have also shown that future gravitational wave observatories will be able to further constrain these models. Finally, I have developed an analytic description of signals from 21cm cosmology using methods inspired by effective field theory. This method includes realistic observational effects and has been validated against state-of-the-art radiation hydrodynamic simulations, including those with alternative dark matter scenarios. With these recent efforts, we are advancing the frontiers of dark matter phenomenology and cosmology, thereby paving the way towards illuminating the remaining mysteries of our cosmos and drawing closer to a comprehensive understanding of the universe.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156657</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Hydrogen Transport in BABY</title>
<link>https://hdl.handle.net/1721.1/156656</link>
<description>Modeling Hydrogen Transport in BABY
Weaver, Colin
Fusion energy stands as a beacon of hope in the realm of sustainable power generation, offering the potential to meet global energy demands without adverse environmental impacts. However, fusion power is not yet commercially viable, and significant research is still needed to develop the technologies necessary to make it a reality. Central to the realization of practical fusion reactors is the efficient management of tritium, a scarce and radioactive isotope that serves as the fuel in a deuterium-tritium fusion reaction. The liquid immersion blanket concept, pioneered by endeavors like Commonwealth Fusion Systems, represents a significant stride towards addressing the challenges of tritium breeding and extraction. At the forefront of this endeavor lies the LIBRA Experiment, the goal of which is to better understand tritium breeding, containment, and extraction under a fusion-like neutron spectrum, and BABY, a scaled-down iteration of the LIBRA Experiment designed to serve as a stepping stone to the full LIBRA Experiment. BABY serves as a testbed for evaluating tritium extraction mechanisms and assessing the feasibility of achieving self-sufficiency in fuel production within a fusion power plant environment. In this context, understanding hydrogen transport phenomena within the BABY system emerges as a crucial aspect of optimizing tritium extraction and ensuring tritium self-sufficiency. By employing advanced modeling techniques that simulate fluid flow and heat transfer to inform tritium transport simulations in FESTIM, this thesis endeavors to elucidate the intricacies of hydrogen migration mechanisms, diffusion rates, and their impact on tritium dynamics within the molten salt environment of BABY.
Steady-state simulations provide the range of bulk tritium transport coefficients for BABY, while transient simulations provide insight into the complex dynamics surrounding tritium transport through the various surfaces of BABY. The findings of this study hold profound implications for the fusion energy landscape, offering valuable insights that can inform the design and operation of future fusion reactors utilizing liquid immersion blankets. By elucidating the factors governing hydrogen transport in BABY, this research aims to contribute to the overarching goal of achieving sustainable and efficient fusion energy production.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156656</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elastically Housed Kinematic Couplings for Interchangeable Electric Vehicle Batteries</title>
<link>https://hdl.handle.net/1721.1/156655</link>
<description>Elastically Housed Kinematic Couplings for Interchangeable Electric Vehicle Batteries
Sams, Sarah A.
Commercial adoption of electric vehicle technologies has lagged due to immense charging downtime, but kinematic couplings have the potential to bridge this barrier by allowing for battery swaps and simultaneous operation and back-up battery charging. Design parameters such as an elastic housing to damp the battery’s connection could overcome challenges with regard to manufacturing tolerance and operational loads. Accounting for proper preload and compliance for disturbance rejection is critical to maintain sufficient electrical contact while avoiding arcing. Kinematic couplings can provide enough contact area through Hertz line contact for a low-resistance electrical contact, and Hertz stresses under load are reasonable for conductive materials to bear without yield over long life cycles. This paper explored kinematic couplings as electrical conductors for electric vehicles by modifying a 2002 GEM E825. Kinematic coupling modifications decrease charging downtime by 98.33%. Elastic housings for ball-socket kinematic couplings are predicted to increase the Hertz contact area by 2684%, while decreasing the stress factor by 98.9% relative to a typical ball-groove kinematic coupling of the same size, allowing for larger vehicle batteries operated under higher forces and currents. Elastically housed kinematic couplings are a promising design pathway towards interchangeable electric vehicle batteries.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156655</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Characterization of a Wave Energy Converter Array Experimental Test Platform</title>
<link>https://hdl.handle.net/1721.1/156654</link>
<description>Design and Characterization of a Wave Energy Converter Array Experimental Test Platform
Herrero-Marques, Penelope
Wave energy is a promising source of renewable energy that has the potential to play a significant role in the global transition towards sustainable energy. Unlike other forms of renewable energy, wave energy is not dependent on weather patterns or daylight hours, making it a reliable and consistent source of energy. As the demand for clean energy continues to grow, wave energy can provide a valuable contribution to the global energy mix, helping to reduce greenhouse gas emissions and mitigate the negative impacts of climate change. Wave energy is harvested by wave energy converters—devices that convert the kinetic energy of ocean waves into electrical energy. Wave energy converter (WEC) arrays consist of multiple individual WEC devices that are arranged in a specific pattern. The arrangement of the devices within the array is designed to optimize their performance and reduce their negative effects on the surrounding environment. Developing reliable models of WEC array performance and optimal array configurations is critical to advancing research in this exciting field. This thesis details the design and validation of a test rig for experimentally testing wave energy converter array performance in the MIT Building 48 tow tank. The test rig features a novel magnetic damper that was designed and characterized to uniquely suit the conditions of the tow tank. The final test rig is capable of measuring the power captured by oscillating buoys as a function of buoy shape, mass, and damping provided. Beyond facilitating hydrodynamics research, the test platform will be a valuable educational resource for classes such as Hydrodynamics that incorporate laboratory components. Its functionality will allow students to explore firsthand the principles of wave energy conversion, buoy dynamics, and the impact of various design parameters on energy capture.
By providing hands-on experience, the test rig will enhance learning outcomes and cultivate a deeper understanding of renewable ocean energy technologies.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156654</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Algorithms, Optimizations, and Benchmarks for Metric and Graph Clustering</title>
<link>https://hdl.handle.net/1721.1/156653</link>
<description>Parallel Algorithms, Optimizations, and Benchmarks for Metric and Graph Clustering
Yu, Shangdi
Clustering is a fundamental unsupervised machine learning task of detecting groups of similar objects in data. Clustering can be used to identify the underlying substructures of data and can detect essential functional groups, such as people with similar interests, news articles on similar topics, or proteins with similar utilities, which can then be used for various downstream tasks. In this thesis, we focus on efficient clustering algorithms for metric and graph data. Both types of data are common in various applications today. Moreover, in modern applications, the size of both types of datasets and the dimensionality of the metric data are scaling rapidly. In this thesis, we address the challenge of clustering large datasets by designing algorithms with high parallelism that take advantage of modern shared-memory multi-core machines, as well as dynamic algorithms that can efficiently update the result without re-computing from scratch. We also present approximate clustering algorithms that scale to high-dimensional data.&#13;
&#13;
The first part of this thesis studies parallel clustering algorithms for low-dimensional metric data. Clustering algorithms are frequently expected to perform numerous similarity searches, as clustering entails grouping similar objects together. Although many algorithms have been designed for nearest neighbor search, many clustering algorithms require customized nearest neighbor search with special constraints, so we cannot use existing nearest neighbor search approaches off the shelf. We present examples of how to design customized similarity searches for hierarchical agglomerative clustering and density peaks clustering algorithms using optimized tree index data structures.&#13;
&#13;
The second part of this thesis studies parallel clustering algorithms for high-dimensional metric data. In this thesis, we show two approaches for clustering high-dimensional data. The first is to design approximate similarity searches that are customized for a particular clustering algorithm. The second is to convert the metric data into a graph representation, and then run graph clustering algorithms on this derived graph. For the first approach, we present an approximate density peaks clustering framework for high-dimensional data using approximate similarity searches. We also show that the framework has good empirical performance with graph-based nearest neighbor search techniques on high-dimensional data. For the second approach, we present an algorithm that clusters high-dimensional metric data by converting data into a particular graph representation called the triangulated maximally filtered graph and then running the directed bubble hierarchical tree algorithm on the converted graph.&#13;
&#13;
The final part of this thesis studies clustering algorithms for graph data. We present a dynamic graph clustering algorithm that can quickly update the output when the input changes instead of performing a slow recomputation from scratch. We also introduce a benchmarking suite for comprehensively evaluating the quality and speed of parallel graph clustering algorithms on both native graphs and k-nearest neighbor graphs converted from metric data. Our evaluation includes methods tailored to both weighted and unweighted graphs.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156653</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Learning Algorithms via Sublinear-Time Methods</title>
<link>https://hdl.handle.net/1721.1/156652</link>
<description>Enhancing Learning Algorithms via Sublinear-Time Methods
Vasilyan, Arsen
Our society increasingly relies on algorithms and data analysis to make critical decisions. Yet almost all work in the theory of supervised learning has long relied on the following two assumptions: 1. Distributional assumptions: data satisfies conditions such as Gaussianity or uniformity. 2. No distribution shift: the data distribution does not change between training and deployment. While natural and often correct, these assumptions frequently do not hold, yet they are routinely made when giving theoretical guarantees for supervised learning algorithms. These guarantees can become null and void should one of these algorithms be used in a setting where the assumptions do not hold. Overall, if critical decisions rely on theoretical reliability guarantees, incorrect assumptions can result in catastrophic failure. The first part of this thesis shows how to mitigate this dependence. We introduce and develop testers which can alert a user if some assumptions are not satisfied. Leveraging insights from the area of property testing, the first part of this thesis constructs such testers for a number of well-studied function classes, addressing distributional assumptions and distribution shift. The second part of this thesis shows how insights from sublinear-time algorithms can also be used to make learning algorithms more runtime-efficient. We show that sublinear-time local algorithms, capable of deriving partial solutions by examining only a fraction of the input, can be used as a powerful primitive to resolve problems in learning theory.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156652</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reliable and Trustworthy AI for Evidence-based Clinical Decision Support in Cancer Care</title>
<link>https://hdl.handle.net/1721.1/156651</link>
<description>Reliable and Trustworthy AI for Evidence-based Clinical Decision Support in Cancer Care
Moon, Intae
The integration of cutting-edge AI methods with real-world clinical data has moved from being a novelty to a necessity in oncology. However, the deployment of AI faces challenges, including the complexity of reliably modeling longitudinal Electronic Health Records (EHR) characterized by missing data and frequent patient drop-outs; patient heterogeneity, which leads to disparities in AI performance; and the need to validate AI models' clinical benefits, especially in managing challenging cancer cases. This thesis presents research focused on addressing these challenges: developing a continuous-time model-based time-to-event regression framework to improve the prediction of clinically meaningful patient outcomes from irregularly sampled EHR data; utilizing data- and algorithm-driven approaches to mitigate AI performance disparity in predicting cancer-associated adverse events across diverse patient demographics; and developing an AI-based decision support tool that integrates genomics and clinical data for evidence-based cancer care, with a focus on improving management of difficult-to-treat cancer cases. This work contributes towards transforming cancer care through reliable and trustworthy AI-driven clinical decision support.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156651</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Post-Quantum Verifiable Oblivious Pseudorandom Functions</title>
<link>https://hdl.handle.net/1721.1/156650</link>
<description>Post-Quantum Verifiable Oblivious Pseudorandom Functions
Propson, Helen
This work presents the construction of a post-quantum verifiable oblivious pseudorandom function (VOPRF) with a focus on efficiency and practicality. Leveraging lattice-based cryptographic primitives, particularly the Learning With Errors (LWE) problem, our VOPRF construction aims to address the limitations of existing approaches by reducing proof sizes. The key component in our work is the integration of an efficient zero-knowledge proof of knowledge (ZKPoK) protocol. This ZKPoK is notably more efficient than the proof systems used in prior VOPRF constructions, ensuring the verifiability of PRF outputs while providing smaller proof sizes. Our construction relies on the hardness of the ring-LWE and short integer solution (SIS) problems, and we demonstrate its security in the random oracle model. Overall, our VOPRF construction represents a step towards the development of more practical post-quantum secure cryptographic protocols, highlighting the potential for further improvements in efficiency and real-world applicability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156650</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Framework for Analysis of Softball Pitching, as Applied to Legal and Illegal Pitches</title>
<link>https://hdl.handle.net/1721.1/156649</link>
<description>A Framework for Analysis of Softball Pitching, as Applied to Legal and Illegal Pitches
Pendowski, Katia D.
In response to the NCAA’s 2023 rule change allowing softball pitchers to legally disengage from the playing surface while delivering a pitch, this study develops a framework to analyze and compare the legal drag, legal leap, and illegal replant pitching techniques. By developing a pose estimation algorithm and Recurrent Neural Network (RNN) for use on videos of real collegiate pitchers, we aim to distinguish physiological differences between these types of pitches and use our RNN to automatically detect illegal pitches. Our pose estimation results demonstrate the algorithm's effectiveness in extracting patterns from pitching videos. Key features such as the distance between the pitcher’s right knee and right toe, as well as the right toe x-position vs. time, emerge as crucial indicators for distinguishing legal and illegal pitches. The RNN achieved an accuracy of 71.4%, with a loss rate of 0.875. This framework offers a data-driven approach to softball pitching mechanics, providing valuable insights for researchers and coaches alike.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156649</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Origami Flasher-Inspired Deployable Structures Through Dynamic and Experimental Modeling</title>
<link>https://hdl.handle.net/1721.1/156648</link>
<description>Analysis of Origami Flasher-Inspired Deployable Structures Through Dynamic and Experimental Modeling
Bai, Jane
The Origami “flasher” model holds immense engineering promise due to its ability to alternate between a compressed 3-dimensional form and a deployed 2-dimensional form. While zero-thickness mathematical models have been thoroughly covered, dynamic modeling and material exploration are essential for the successful design of finite-thickness models. In this research, the mathematical effects of parameters such as center polygon size, unit panel length, and crease arrangement on flasher surface area optimization are first established. Software is then used to create a dynamic model that combines kinematic analysis with material properties to visualize the folding geometry and internal strain of the flasher pattern and to identify points of analysis for the experimental model. Finally, a stored-energy-based deployable experimental model is made using Yupo paper and video analysis done to understand damping behavior, deployment trajectory, and torque distribution. A discussion on design considerations for flasher patterns follows and potential topics for future research are set forth.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156648</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neuro-Symbolic Learning for Bilevel Robot Planning</title>
<link>https://hdl.handle.net/1721.1/156646</link>
<description>Neuro-Symbolic Learning for Bilevel Robot Planning
Silver, Tom
Decision-making in robotics domains is complicated by continuous state and action spaces, long horizons, and sparse feedback. One way to address these challenges is to perform bilevel planning, where decision-making is decomposed into reasoning about “what to do” (task planning) and “how to do it” (continuous optimization). Bilevel planning is powerful, but it requires multiple types of domain-specific abstractions that are often difficult to design by hand. This thesis proposes the first unified system for learning all the abstractions needed for bilevel planning. Beyond learning to make planning possible, this thesis also considers learning to make planning fast, especially in environments with many objects. A final contribution considers planning to learn, where the robot iteratively plans online to collect additional data and then learns to improve planning. Altogether, the thesis represents a step toward a general-purpose robot that can autonomously synthesize a specialized library of abstractions and plan to solve a very broad set of tasks.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156646</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Learning-guided Search for Coordination of Multi-agent Transportation at Scale</title>
<link>https://hdl.handle.net/1721.1/156645</link>
<description>Towards Learning-guided Search for Coordination of Multi-agent Transportation at Scale
Yan, Zhongxia
While transportation is an age-old problem, new technologies for autonomy raise new possibilities and realities for the coordination of hundreds or thousands of vehicles and robots: criss-crossing autonomous vehicles, faster and cheaper Amazon delivery, and robot warehouses for storage, sorting, and fetching. How do we tackle these new optimization challenges? In this thesis, I highlight multiple levels of decision-making in large-scale transportation problems, ranging from assignment of tasks to collision-free path and motion planning, and everything in between (e.g. order of goals, routing, order of crossing, lane changing, continuous acceleration control). As practical solutions must be obtained in limited time, we leverage machine learning policies embodying offline experience to improve decision-making. However, as we find in the coordination of autonomous vehicles, policy learning alone may accommodate highly nonlinear continuous system dynamics but is insufficient for addressing the combinatorial discrete decisions in high-dimensional multi-agent systems. Thus, we investigate a more effective paradigm for tackling multi-agent transportation problems, which involves 1) identifying or designing well-suited search-based algorithms for the problem setting, then 2) designing machine learning approaches for guiding and accelerating the search algorithm. For problems ranging from vehicle routing problems (VRPs) to multi-agent path finding (MAPF), we find that, while the design of a well-suited search-based algorithm is important, deep neural network policies consistently accelerate or improve the solution quality of state-of-the-art search algorithms while eliminating the need for hand-designed search heuristics. With extensive empirical evaluations, we demonstrate that such learned policies often generalize beyond their training distributions to broader problem distributions.
Finally, we return to the problem of autonomous vehicle coordination to design efficient search algorithms leveraging the structures of crossing orders at intersections with continuous vehicle kinematics, motivating further research in learning-guided crossing order search and semi-centralized coordination of vehicles/robots.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156645</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robot Planning in Uncertain, Dynamic Environments</title>
<link>https://hdl.handle.net/1721.1/156644</link>
<description>Robot Planning in Uncertain, Dynamic Environments
Cheerla, Anika
Many real-world applications require robots to operate in dynamic environments characterized by moving objects or agents whose trajectories are unpredictable. This thesis addresses the challenges posed by such environments by introducing Relative Temporal Probabilistic Roadmaps (Rel-T-PRM), a novel motion planning algorithm that builds upon the Temporal Probabilistic Roadmap (T-PRM) algorithm. Rel-T-PRM allows for variable dynamic obstacle size, enables robustness with respect to minor changes in time and position, and introduces the concept of waiting until obstacles clear. Furthermore, we leverage Rel-T-PRM’s strengths to propose two replanning strategies. The first attempts to rapidly replan on-the-fly by using waiting to modify the trajectory without needing to modify the path. The second identifies and plans to safe locations, where the robot can safely replan under a longer time horizon. We demonstrate Rel-T-PRM through a variety of simulation experiments on a fixed-base robotic manipulator.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156644</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization of FASP for Li-Ion Cathode Material Production</title>
<link>https://hdl.handle.net/1721.1/156643</link>
<description>Optimization of FASP for Li-Ion Cathode Material Production
Hickey, Connor
The need to improve battery technology is higher than ever given the projected increase in battery consumption in the next decade and beyond. One key limiting factor of batteries is their cost, and one major way to reduce battery costs is by decreasing the time needed to produce them. The FASP system uses Flame-Assisted Spray Pyrolysis and the principles of combustion to speed up the process of creating the materials for lithium-ion cathodes, more specifically the NCM-811 variant of lithium-ion cathode material. This study aims to identify key factors in improving the powder production of the FASP system. One way it does this is by creating a CFD simulation, within ANSYS, to build an accurate picture of the behaviour of the fluids within the pipe flow. Another is by conducting various experiments to test the simulations and find areas of disagreement that point toward improvements to the CFD model. Finally, this paper aims to optimize FASP by conducting several powder-production experiments and testing various variables to find the best combination.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156643</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interplay between Correlation and Topology in Two-dimensional Systems</title>
<link>https://hdl.handle.net/1721.1/156642</link>
<description>Interplay between Correlation and Topology in Two-dimensional Systems
Dong, Zhihuan
A new class of 2D materials, moiré superlattices, has emerged and become one of the most exciting playgrounds for the study of many-body physics. These systems, thanks to their unprecedented tunability, have exhibited a plethora of interesting phenomena in experiments, such as unconventional superconductivity, strange metal behavior, exciton condensation, emergent Kondo lattice physics, quantum Hall ferromagnetism, the (fractional) quantum anomalous Hall effect, and the formation of anomalous Hall crystals. In many of these systems, there is a nearly flat low-energy band with non-zero Berry curvature. This generalizes the familiar quantum Hall physics to a broader context, where both dispersion and quantum geometry can be varied. This thesis focuses on novel quantum phases in systems where three ingredients, kinetic energy, interaction, and band topology, all play a role. First, we demonstrate quantum Hall ferromagnetism in a topological band, which is a simple yet striking example of the crucial role of band topology in the consequences of strong correlations. Motivated by this, we present a fruitful framework for thinking about the effects of interaction within a topologically nontrivial band, known as non-commutative field theory. This development provides an analytical handle on this broad class of challenging problems and settles long-standing puzzles shrouding quantum Hall physics. Most existing studies focus on the strong correlation effect on a stage defined by band topology. However, this picture is only justified when the single-particle band gap dominates over interaction. Going beyond this regime, we study two representative moiré systems: (1) the quantum anomalous Hall effect in transition metal dichalcogenide moiré materials and (2) the fractional quantum anomalous Hall effect in multilayer rhombohedral graphene moiré materials. 
In these systems, instead of playing a role on the stage defined by the band topology, the interaction is strong enough to determine the band topology. We identify various mechanisms for interactions to stabilize a Chern band.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156642</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Organizational Economics and Strategy</title>
<link>https://hdl.handle.net/1721.1/156641</link>
<description>Essays in Organizational Economics and Strategy
Quist, Kramer
In these essays, I explore how organizational structure interacts with other features of an organization to influence strategy. In the first paper, I consider how an organization’s cognitive diversity interacts with organizational structure to influence the degree to which the organization chooses to pursue exploratory new ideas. In the second paper, I consider when delegation motivates or demotivates employees. In the final paper, I consider how different types of communication technologies complement different types of organizational structures.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156641</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>ScaleGPS: Scalable Graph Parallel Sampling via Data-centric Performance Engineering</title>
<link>https://hdl.handle.net/1721.1/156640</link>
<description>ScaleGPS: Scalable Graph Parallel Sampling via Data-centric Performance Engineering
Cai, Miranda J.
Graph sampling extracts representative samples of a graph, so that approximate graph algorithms can be used in place of expensive, exact algorithms while still achieving high-quality results. Thus, graph sampling plays an important role in many modern graph-based applications, such as graph machine learning and graph data mining. However, because of unstructured sparsity in the graph data and the randomness in the sampling algorithms, graph sampling is often the computational bottleneck. To accelerate it, parallel graph sampling methods exist for multicore CPUs and GPUs, but limitations arise on both sides: CPU implementations are much slower than GPU ones due to lower throughput, while limited GPU memory capacity restricts GPU implementations to small input graphs. We present the idea behind a scalable graph sampling framework, ScaleGPS, to support high-performance graph sampling on huge graphs in a single machine with a CPU and a GPU. The key idea is to cooperatively employ data caching and compression to reduce memory footprint and data movement overhead, and thus achieve high performance and scalability. The challenge in applying caching and compression to graph sampling is two-fold. First, the randomness in sampling leads to redundant computation and memory accesses, and thus low work efficiency. Second, real-world graphs often exhibit skewed degree distributions, where a fixed strategy cannot optimally handle all cases. We propose a hybrid and adaptive strategy to address this challenge. First, we split the vertices in the graph into two groups based on their degrees. For each group, we store the neighbor lists in different formats, to make full use of the scarce GPU memory resources. Based on this hybrid compression method, we use the GPU memory as a cache of the CPU memory, and adaptively cache hot data to minimize the data movement overhead between the CPU and GPU. 
We implement our strategy in ScaleGPS and evaluate it on a single machine with a 48-core CPU and an A100 GPU. Our experimental results on various sampling algorithms show that ScaleGPS is able to support billion-edge graphs (up to 84 billion edges) in a single machine. While the performance benefits on these largest graphs are still undetermined, ScaleGPS achieves an average of 33.4× (up to 93×) speedups for smaller graphs over state-of-the-art parallel CPU implementations.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156640</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>MashupMuse: A Web Application for Easier Music Mashup Creation</title>
<link>https://hdl.handle.net/1721.1/156639</link>
<description>MashupMuse: A Web Application for Easier Music Mashup Creation
Meng, Julie
The intersection of music and technology enables a form of musical expression known as a music mashup—a creative work that combines elements from multiple existing songs into a new, cohesive piece. The traditional process for creating a mashup with standard music editing software can be time-consuming for experienced mashup creators and intimidating for new creators. This software has a steep learning curve and more functionality than required for mashup enthusiasts. Over the last fifteen years, researchers have attempted to simplify this process through solutions with user-friendly interfaces for streamlined mashup creation. With the rise of artificial intelligence, some recent tools automate the mashup process entirely, which strips users of creative control and potentially leads to musically unsatisfying results. Current mashup software falls short either in functionality or user-friendliness, leaving a need for a platform that balances technological assistance and creative freedom. In response to this need, we propose MashupMuse, a web application that simplifies music mashup creation by automating certain parts of the mashup creation process, while simultaneously leaving room for creative freedom. MashupMuse separates each song’s audio into individual tracks, such as vocals, bass, and drums. It allows users to select sections from these tracks and arrange them on a master track while automatically handling beat and key adjustments. This balance of automation and creative freedom offers users a streamlined yet flexible music editing experience. During user testing, we found notable advantages in comparison with a similar mashup creation application. Finally, we outline future work to further improve the user experience.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156639</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Motion-Compensated Viewpoint Shift</title>
<link>https://hdl.handle.net/1721.1/156638</link>
<description>Motion-Compensated Viewpoint Shift
Tao, Julius L.
Eye contact is an essential social cue that conveys our attention to others but is difficult to maintain during video calls. Many existing methods to synthesize a gaze-corrected view involve estimating a 3D face model and projecting it into the desired camera view, which is too computationally expensive for most personal computers. By drawing inspiration from 2D methods of video frame interpolation, we wish to not only correct eye gaze but also better align the face towards the camera without this expensive 3D modeling. Our findings suggest that adding a second webcam opposite the first and interpolating between the two outer camera views can give realistic, gaze-aligned center views. We conclude that the prevailing approach of 3D modeling is surprisingly not necessary for gaze correction. Not only do 2D techniques suffice, but their synthesized frames can appear more natural than prior results. We believe that this work is a crucial step towards true-to-life viewpoint shift for live video conferences.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156638</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Wearable Device to Inform Pressure Injury Prevention Support Surfaces Selection and Design</title>
<link>https://hdl.handle.net/1721.1/156637</link>
<description>A Wearable Device to Inform Pressure Injury Prevention Support Surfaces Selection and Design
Sapozhnikov, Katherina
Pressure injuries are a preventable but persistent medical challenge, with 2.5 million Americans developing pressure injuries each year. Pressure injuries are uniquely challenging to manage for wheelchair users, who have to sit for extended periods of time, up to 10-12 hours per day. Measuring the interface pressure between support surfaces and the body can assist in selecting surfaces that minimize the pressure to prevent pressure injuries from developing. However, pressure mapping systems are expensive and inaccessible for personal use outside of rehabilitation centers and hospitals. A prototype was developed to measure the interface pressure and movements of the user, using force sensing resistors and accelerometer data. Through this system, the interface pressure across surfaces can be compared to select appropriate sitting surfaces, inform repositioning habits, and prevent pressure injury development.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156637</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pressure Swing Assisted Desorption for Atmospheric Water Collection</title>
<link>https://hdl.handle.net/1721.1/156636</link>
<description>Pressure Swing Assisted Desorption for Atmospheric Water Collection
Kim, Haeri
Ensuring water equity is an urgent and challenging issue in light of climate change, global conflict, and socioeconomic disparities. Atmospheric water harvesting provides a promising option to collect water from the air, even in considerably dry conditions (RH &lt; 40%), expanding the reach of this application to areas that would be particularly prone to clean water scarcity. The MIT Device Research Lab is developing a device for such applications that would be capable of producing drinkable water in even extremely dry environments. Current methods of atmospheric water harvesting focus on thermal desorption, where heat is applied to release the water vapor from the sorbent. Another method worth exploring utilizes pressure swings to release this water vapor; this thesis examines how a combined method of thermal and depressurized desorption affects the efficiency of the device. An initial MATLAB model showed that the methods, ordered from slowest to fastest, should be (1) solely pressure swing desorption, (2) solely temperature swing desorption, and (3) the combined method using simultaneous pressure and temperature swings. The results from the experiment using the MOF UiO-66 as the sorbent showed that the combined procedure would indeed be the fastest, potentially twice as fast as a purely thermal desorption method and five times faster than a purely depressurized desorption method. The next step following this project would be the assembly of a vacuum-grade enclosure in which a small-scale test unit of the device the MIT DRL is developing can be tested. A detailed design and brief procedure are included in the final section of this thesis.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156636</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Next-Generation Intelligent Portfolio Management</title>
<link>https://hdl.handle.net/1721.1/156635</link>
<description>Next-Generation Intelligent Portfolio Management
Zhao, Zijie
In the fast-paced world of financial technology, the integration of advanced Natural Language Processing (NLP) and Deep Reinforcement Learning (DRL) is transforming portfolio management. This thesis presents a pioneering portfolio management framework that leverages Transformer-based models and Large Language Models (LLMs) to enhance return predictions and sentiment extraction from extensive financial texts coupled with robust DRL trading agents to optimize portfolio performance. We introduce an adaptive retrieval-augmented framework for LLMs, finely tuned through instruction tuning to align with human instructions and incorporate market feedback. This approach enables dynamic weight adjustments within the Retrieval-Augmented Generation (RAG) module, showcasing the synergy between extracting more accurate underlying sentiment and better capturing stock movements, resulting in more profitable and robust portfolios. Additionally, we address the challenges of applying DRL to stock trading by developing the Hierarchical Reinforced Trader (HRT). This innovative strategy employs a bi-level DRL framework that combines strategic stock selection via a High-Level Controller with effective trade executions managed by a Low-Level Controller. Our results demonstrate significant enhancements in portfolio management, achieving higher Sharpe ratios than the S&amp;P 500 benchmark in bullish markets, while also substantially reducing losses and drawdowns in bearish and volatile market scenarios. Moreover, model interpretability is crucial given the black-box nature of both LLMs and DRL models. Practitioners without a strong machine learning background require clear interpretations of model outputs. To address this, one idea is to consider features univariately, omitting feature interactions to maintain interpretability. The Univariate Flagging Algorithm (UFA) identifies optimal cut points for each feature, flags them, and summarizes them to lower dimensions for each sample. 
We further enhance the UFA framework within the Generalized Additive Model (GAM), extending it to a broader framework capable of modeling any data generated by exponential family distributions. Our comparative analysis on various public benchmark datasets demonstrates that our extended framework not only achieves better predictive results than the original UFA but also retains its robustness against missing and imbalanced datasets. In conclusion, this thesis underscores the significant potential of integrating advanced NLP and DRL techniques into portfolio management, setting a new standard for intelligent financial decision-making.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156635</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lab-to-Fab Monolithic 3D Integrated Carbon Nanotube Transistors: Scaling and Reliability</title>
<link>https://hdl.handle.net/1721.1/156634</link>
<description>Lab-to-Fab Monolithic 3D Integrated Carbon Nanotube Transistors: Scaling and Reliability
Yu, Andrew C.
Conventional scaling of silicon integrated electronics can no longer yield improvements that keep pace with the increasing computing demands of abundant-data applications. Moreover, for data intensive computing applications, a majority of system energy is consumed moving data between compute and off-chip memory, which are often physically separate with limited connectivity. This is termed the “memory wall”. A promising solution to this problem is monolithic 3D integration, in which layers of compute and memory are designed and integrated together vertically in the same monolithic 3D nanosystem, connected by ultra-dense, nanoscale interconnects, referred to as interlayer vias (ILVs). This provides significant projected system-level energy-delay benefits beyond conventional 2D physical and equivalent scaling. However, conventional silicon logic and memory technologies are incompatible with such monolithic 3D integration and cannot be used to realize such 3D nanosystems.&#13;
&#13;
In this thesis, I first develop, and then establish within a commercial foundry, a monolithic 3D technology using back-end-of-line (BEOL) carbon nanotube FET (CNFET) + Resistive RAM (RRAM) stack over silicon CMOS that achieves comparable memory performance (read power, write energy/latency, endurance, retention, multiple bits-per-cell capability) in the same footprint as a conventional RRAM stack using front-end-of-line (FEOL) silicon FET access transistors. This is accomplished through the following: (1) I develop the first CNFET process that is lift-off-free and can scale to advanced process technology nodes, (2) I lab-to-fab transfer and adapt this process from an academic prototype into a commercial CMOS foundry process on 200 mm wafers at a 90 nm technology node equivalent, and (3) I improve the scaling, variation, and reliability of lift-off-free BEOL CNFETs to achieve iso-performance, iso-footprint, and iso-reliability BEOL memory metrics. This process is established within SkyWater Technology Foundry (90/130 nm technology node on 200 mm Si wafers) and an apples-to-apples comparison is made directly versus FEOL Si FET + RRAM fabricated on the same wafers, from the same foundry, at the same node.&#13;
&#13;
Such BEOL CNFET + RRAM technology promises to unlock a large architecture design space with significant system-level energy-delay product (EDP) benefits vs. FEOL Si + RRAM-only designs, e.g., &gt;5× EDP benefits for new iso-footprint, iso-memory-capacity monolithic 3D architectures uniquely enabled by new monolithic 3D physical design. In summary, this thesis experimentally implements and demonstrates foundry monolithic 3D using beyond-silicon nanotechnologies as a complementary integration path for dramatically improving system-level energy-efficiency and performance.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156634</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning the language of biomolecular interactions</title>
<link>https://hdl.handle.net/1721.1/156633</link>
<description>Learning the language of biomolecular interactions
Sledzieski, Samuel
Proteins are the primary functional unit of the cell, and their interactions drive cellular function. Interactions between proteins are responsible for a wide variety of functions ranging from catalytic activity to cellular transport and signaling, and interactions between small molecules and proteins are the foundation of many therapeutics. However, the experimental determination of these interactions is expensive and relatively slow, limiting the ability to model interactions at genome scale. It is therefore critical to develop computational approaches for modeling these interactions. Unsupervised language models trained on amino acid sequences, namely protein language models, learn patterns in sequence evolution that encode protein structure and function. These protein language models are thus a powerful tool for extracting features of proteins, enabling the adoption of lightweight downstream models. Here, we present novel machine learning techniques for adapting protein language modeling to the prediction of protein interactions at scale, enabling de novo interaction network inference and large-scale drug compound screening. We show that these methods achieve state-of-the-art performance, and allow us to discover new biology and therapeutic candidates. In addition, we introduce methods for efficient training and adaptation of these models, and outline several applications which take advantage of the scale enabled by lightweight models. As a whole, this thesis demonstrates how computational advances in language modeling and the massive growth of data brought about by the sequencing revolution can be leveraged to tackle the genotype-to-phenotype challenge in biology, and lays the groundwork for more widespread adoption of these techniques for proteomic modeling.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156633</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flexible Privacy via Disguising and Revealing</title>
<link>https://hdl.handle.net/1721.1/156632</link>
<description>Flexible Privacy via Disguising and Revealing
Tsai, Lillian
Users have tens to hundreds of accounts with web services that store sensitive data, from social media to tax preparation and e-commerce sites. While users have the right to delete their data (via e.g., the GDPR or CCPA), more nuanced data controls often don’t exist. For example, a user might wish to hide and protect their profiles on an e-commerce or dating app when inactive, and to recover their accounts should they return to the application. However, services often provide only coarse-grained tools that result in all-or-nothing exposure of users’ private data.&#13;
&#13;
This thesis introduces the notion of *disguised data*, a reversible state in which sensitive data is hidden. To demonstrate the feasibility of disguised data, this thesis also presents Edna— the first system for disguised data—which helps database-backed web applications provide new privacy features for users, such as removing their data without permanently losing their accounts, anonymizing their old data, and selectively dissociating personal data from public profiles. Edna helps developers support these features while maintaining application functionality and referential integrity in the database via *disguising* and *revealing* transformations. Disguising selectively renders user data inaccessible via encryption, and revealing restores their data to the application. Edna’s techniques allow transformations to compose in any order, e.g., deleting a previously anonymized account, or restoring an account back to an anonymized state.&#13;
&#13;
With Edna, web applications can enable flexible privacy features with reasonable developer effort and moderate performance impact on application operation throughput. In the Lobsters social media application—a 160k LoC web application with &gt;16k users—adding Edna and its features takes &lt;1k LoC, and decreases throughput 1–7% in the common case. Edna decreases throughput up to 28% when a heavy user who owns 1% of all application data continuously disguises and reveals their account.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156632</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Practical Quantum Computing Systems with Intelligent Cross-Stack Co-Design</title>
<link>https://hdl.handle.net/1721.1/156631</link>
<description>Toward Practical Quantum Computing Systems with Intelligent Cross-Stack Co-Design
Wang, Hanrui
Quantum Computing (QC) has the potential to solve classically hard problems with greater speed and efficiency, and recent years have seen exciting advancements in this field. However, significant gaps remain between application requirements and the capabilities of current devices, particularly in terms of software framework support, efficiency, and reliability. To bridge these gaps and fully unleash the power of quantum computing, it is critical to perform AI-enhanced co-design across various technology stacks, from algorithm and program design to compilation and hardware architecture. In this thesis, we aim to develop architectural and system support for quantum computing. To address the software support gap, I will discuss two compilation frameworks—FPQAC and Q-Pilot—designed for the Field-Programmable Qubit Array (FPQA) implemented with emerging reconfigurable neutral atom arrays. This architecture leverages movable atoms for routing two-qubit gates, and we optimize atom movements and gate scheduling for high scalability and parallelism. To enhance reliability, I will introduce QuantumNAS and TorchQuantum, frameworks for quantum program structure (ansatz) design for variational quantum algorithms. QuantumNAS employs an intelligent search engine and utilizes noisy feedback from quantum devices to optimize program structure and qubit mapping tailored to specific hardware, leading to significant resource reduction and reliability improvements. Additionally, I will present QuantumNAT, QOC, and RobustState for noise-aware training of parameters in variational quantum algorithms to ensure high reliability. The DGR framework will also be discussed for addressing drifted and correlated errors in quantum error correction decoding. Furthermore, I will introduce QuEST, which leverages data-driven AI models to predict the reliability of arbitrary quantum circuits on real quantum hardware. 
Finally, to close the efficiency gap, I will present SpAtten, an algorithm-architecture-circuit co-design aimed at efficient Transformer-based quantum error correction decoding, and the SpArch accelerator, designed for sparse tensor algebra to enable efficient quantum control signal generation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156631</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Fabrication and Assembly for In Situ Manufacturing</title>
<link>https://hdl.handle.net/1721.1/156630</link>
<description>Computational Fabrication and Assembly for In Situ Manufacturing
Nisser, Martin Eric William
Fabrication today relies on disparate, large machines spread across industrial facilities. These are operated by domain experts to construct and assemble artefacts in sequential steps from large numbers of parts. This traditional, centralized mass manufacturing paradigm is characterized by large capital costs and inflexibility to changing needs, complex global supply chains hinged on economic and political stability, and waste and over-manufacturing of uniform artefacts that fail to meet the technical and personal needs of today’s diverse individuals and use cases. Today, these challenges are particularly severe at points of need, such as the space environment. The space environment is remote and unpredictable, and the ability to manufacture in situ offers unique opportunities to address new challenges as they arise. However, the challenges faced in space are often mirrored on Earth. In hospitals, disaster zones, low resource environments and laboratories, the ability to manufacture customized artefacts at points of need can significantly enhance our ability to respond rapidly to unforeseen events. In this thesis, I introduce digital fabrication platforms with co-developed hardware and software that draw on tools from robotics and human-computer interaction to automate manufacturing of customized artefacts at the point of need. Highlighting three research themes across fabrication machines, modular assembly, and programmable materials, the thesis will cover a digital fabrication platform for producing functional robots, a modular robotic platform for in-space assembly deployed in microgravity, and a method for programming magnetic material to selectively assemble.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156630</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical Sensing as a Utility using Swept-Source Raman Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/156629</link>
<description>Chemical Sensing as a Utility using Swept-Source Raman Spectroscopy
Persits, Nili
The integration of chemical sensing into everyday life is a decades-old dream that has so far failed to come to fruition. Many sensor technologies have been proposed and developed, but few can claim to be non-destructive, reagent-free, and suitable for multiple applications while also enabling significant scale-up and remaining cost-effective. &#13;
&#13;
This thesis proposes a utility service model for chemical sensing using Swept-Source Raman Spectroscopy (SSRS) that addresses these challenges. First, we introduce the SSRS fiber probe, which allows Raman spectra to be measured with a single-point detector and only a few milliwatts of tunable laser excitation. We validate the probe design by monitoring nitrate fertilizer in a hydroponic setup, in environmental water samples, and in growing plants with sensitivity and resolution equivalent to benchtop systems. We further demonstrate the scaling up of SSRS into a sensor network by leveraging readily available data communication optical fiber infrastructure. We showcase a 16-sensor network that uses the laser as a shared resource and develop an engineering-based cost model that supports the scaling up of this network to dozens of sensors deployed over kilometers. Lastly, we monitor metabolites in a therapeutic-producing cell culture, and use linear regression models and a priori information about our samples to reduce the spectral acquisition time, making this sensor architecture competitive in both performance and cost with existing solutions. These findings represent significant progress towards achieving ubiquitous chemical sensing and facilitating the integration of chemical sensors into everyday life.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156629</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Cryptographically Private and Verifiable Computation through Hardware-Software Co-Design</title>
<link>https://hdl.handle.net/1721.1/156628</link>
<description>Practical Cryptographically Private and Verifiable Computation through Hardware-Software Co-Design
Samardzic, Nikola
Fully Homomorphic Encryption (FHE) and Verifiable Computation (VC) enable offloading computation to untrusted servers with cryptographic privacy and integrity guarantees. Despite their attractive security properties, FHE and VC are not widely adopted because (1) they suffer prohibitive performance overheads, about 10,000× to 1,000,000× over unencrypted and unverified computation, respectively, and (2) they are hard to use even for expert cryptographers: porting non-trivial applications takes experts months of manual work.&#13;
This thesis contributes hardware and software techniques to make FHE and VC practical. Specifically, we present a full hardware and software stack for FHE that addresses its performance and usability challenges, consisting of hardware accelerators that erase FHE’s overheads, a redesign of the state-of-the-art FHE scheme to make accelerators more efficient, and an FHE compiler that produces efficient programs from high-level code. We then leverage the commonalities between FHE and VC to design an accelerator that reduces VC overheads.&#13;
F1 and CraterLake are FHE accelerators that improve performance over state-of-the-art by 10,000×. F1 is the first programmable FHE accelerator, and erases most performance overheads for smaller FHE programs. CraterLake builds on F1, and is the first accelerator able to support arbitrarily large FHE programs effectively.&#13;
F1 and CraterLake’s speedups bring with them new bottlenecks, mainly arithmetic efficiency. We present BitPacker, a new implementation of an FHE scheme that keeps encrypted data packed in fixed-size words, enabling near-full arithmetic efficiency in accelerators. BitPacker is the first redesign of an FHE scheme that targets accelerators. On CraterLake, BitPacker improves performance by gmean 59% and up to 3×, and reduces energy by gmean 61%.&#13;
To make the performance we unleashed accessible to non-experts, we contribute Fhelipe, a compiler that abstracts away FHE’s implementation details and hides its complex and restrictive programming interface. Fhelipe translates high-level tensor programs into optimized FHE circuits that can then be executed on CraterLake or a CPU. Fhelipe produces compiled programs that match or exceed the performance of state-of-the-art manual implementations. It also outperforms prior FHE compilers by gmean 18.5× on a wide set of benchmarks.&#13;
While FHE provides data privacy, it does not provide integrity. NoCap is a hardware accelerator that enables practical integrity by speeding up verifiable computation by 40× over state-of-the-art accelerators and by 580× over a CPU.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156628</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heterogeneous Integration of Spin-Photon Interfaces with a Scalable CMOS Platform</title>
<link>https://hdl.handle.net/1721.1/156627</link>
<description>Heterogeneous Integration of Spin-Photon Interfaces with a Scalable CMOS Platform
Li, Linsen
A central challenge in the development of long-range high-speed quantum networks and fault-tolerant quantum computing is the generation of large-scale entanglement between quantum systems. Color centers in diamond have emerged as a leading quantum information processing platform, satisfying the DiVincenzo criteria for quantum computing and recently enabling a quantum advantage in communications. However, it is estimated that general-purpose quantum information processors will require millions to billions of high-quality physical qubits, motivating the need for hardware architectures that are highly scalable by leveraging modern semiconductor integrated systems.&#13;
&#13;
Here, we introduce a scalable quantum information processing hardware architecture in a proof of concept consisting of an addressable and tunable two-dimensional array of tin-vacancy centers, hybrid-integrated on a foundry-process electronics control chip. We demonstrate the necessary components individually, such as scalable high-yield heterogeneous integration between diamond nanostructures and the foundry control chip, parallel control and measurement, tuning of quantum emitter emission wavelength as well as lifetime, and coherent light correlation with quantum emitters, as a proof of concept for a scalable architecture capable of hosting thousands to millions of qubits. Besides the experimental demonstration of the architecture, the thesis includes free-space spin-photon interface design, quantum emitter strain engineering, scalable high-quality fabrication technology, general theoretical analysis of the architecture, and AI-assisted quantum resource scheduling, for a deep discussion of the different essential components of the system.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156627</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representation Learning Associates Patients’ Risks for Metabolic Diseases with Features of Their Lipocytes</title>
<link>https://hdl.handle.net/1721.1/156626</link>
<description>Representation Learning Associates Patients’ Risks for Metabolic Diseases with Features of Their Lipocytes
Tan, Zipei
Polygenic risk scores (PRS) estimate an individual’s risk of developing a certain disease, suggesting that differences between cells of individuals with high versus low PRS could give us insight into the cellular disease mechanisms. To study metabolic diseases, we analyze the distribution of cell states of lipocytes of individuals with different PRS for metabolic diseases, thereby associating individual-level genotypes with cell-level features. To accomplish this, we make use of a recent large-scale lipocyte microscopy imaging dataset. We learn a representation of multi-channel lipocyte microscopy images using a convolutional autoencoder and perform unsupervised clustering on the learnt representations to identify different cell states. We analyze the distribution of these cell states in different individuals and associate their PRS with the observed cell state distributions. Finally, we show that it is possible to generate counterfactual lipocyte images and understand the effect of increased or reduced PRS on cell states by transforming the learnt representations.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156626</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multifaceted Understanding of Accreting Neutron stars and their Environments</title>
<link>https://hdl.handle.net/1721.1/156625</link>
<description>Multifaceted Understanding of Accreting Neutron stars and their Environments
Ng, Wei Chieh (Mason)
Accreting neutron stars are cosmic laboratories featuring some of the most extreme processes in the universe, hosting an accretion disk that supplies material that is magnetically channeled onto the magnetic poles of the neutron star. The emission from these accreting neutron stars peaks in the X-rays, owing to the gravitational potential energy released as the accreting material falls into the deep gravitational well of the neutron star. The community has been utilizing X-ray timing and spectroscopy for decades to unravel the mysteries of these objects, with X-ray polarimetry being a recent development providing two additional observables.&#13;
&#13;
In my thesis, I showcase a multifaceted approach to studying accreting neutron star binaries, employing X-ray timing, spectroscopy, and polarimetry with many X-ray instruments to advance our understanding of the dynamics and evolution of these systems. I have also developed an end-to-end pulsation pipeline tool designed for rapid characterization of new X-ray transients, particularly neutron stars. In the analyses undertaken as part of my thesis, I have incorporated multiple techniques and instruments to develop a comprehensive understanding of the phenomenology of many neutron star systems, such as accreting millisecond X-ray pulsars, ultraluminous X-ray pulsars, ultracompact X-ray binaries, and Z/atoll-state sources. It is through this multifaceted application that we can reveal a holistic description of neutron star binaries.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156625</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photocurrent Spectroscopy Study of Graphene / Hexagonal Boron Nitride Moiré Superlattice In the Far-Infrared Regime</title>
<link>https://hdl.handle.net/1721.1/156624</link>
<description>Photocurrent Spectroscopy Study of Graphene / Hexagonal Boron Nitride Moiré Superlattice In the Far-Infrared Regime
Yang, Jixiang
Two-dimensional (2D) materials and their heterostructures, especially those with moiré superlattices, have been one of the most fascinating topics in physics in recent years. Much of the interesting physics, for example the correlated insulating states at half- or quarter-fillings of the moiré band, occurs in the far-infrared energy range. However, there are very few optical spectroscopic studies of these 2D materials due to many intrinsic limitations. In this thesis, I will introduce a method named Fourier-transform infrared (FTIR) photocurrent spectroscopy. I will discuss the advantages of this method, and why it is suitable for far-infrared studies of 2D materials. Then I will apply it to monolayer graphene / hexagonal boron nitride (hBN) moiré superlattices, where I accurately measure the gap ∆ opened at the charge-neutrality point (CNP) by the moiré superlattice. The relationship between the gap size and the moiré wavelength will also be discussed. Finally, I will discuss the possibility of applying this technique to other novel physical phenomena and other 2D systems.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156624</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spectral-Timing Observations of Disk-Jet Coupling in Black Hole X-ray Binaries</title>
<link>https://hdl.handle.net/1721.1/156623</link>
<description>Spectral-Timing Observations of Disk-Jet Coupling in Black Hole X-ray Binaries
Wang, Jingyi
Accreting black holes are a fundamental tool for understanding accretion and ejection physics, and are ideal laboratories to ultimately test Einstein's general relativity (GR) in the strongest gravity regime in the Universe. High-fidelity GR tests require precise knowledge of the physical environments in which particles move. Two of the biggest challenges are how close to the event horizon the inspiraling gas reaches, and how relativistic jets are launched. The puzzle piece linking these two challenges together is the nature and geometry of the hot (hundreds of keV) X-ray emitting plasma called the “corona". X-ray reverberation mapping, where X-rays produced close to the black hole reverberate off inspiraling gas, allows us to map out scales close to the event horizon -- orders of magnitude better than the resolution of our telescopes. Black hole X-ray binaries (BHXBs) are binary systems with a stellar-mass black hole and a companion star. They are usually transients, cycling through phases of quiescence and outburst in which they exhibit different accretion states with distinct spectral-timing features, allowing us to study the accretion-ejection physics or disk-jet coupling in a single source on a human timescale. In MAXI J1820+070, I discover that the soft reverberation lag becomes longer during the hard-to-soft state transition, several days before the transient radio jet is observed. Together with the discovery that the reverberation lag gets shorter in the hard state while the compact jet becomes weaker, this result suggests a close relationship between the X-ray corona and the radio jet. The corona might be the base of the jet that expands and/or gets ejected during the state transition.
In the "NICER reverberation machine", I expand the sample size of BHXBs in which reverberation is detected from 3 to 11, and find that the evolution of the reverberation lag in the hard and intermediate states is a generic feature of BHXBs that should be explained by state transition models. I explore simultaneous modeling of the flux-energy spectrum and cross spectra, and present a proof of concept for applying machine learning to fitting the cross spectra. I also study the BHXB IGR J17091--3624, which exhibits “heartbeat"-like variability in its 2022 outburst, and find that the source began in traditional hard and intermediate states and transitioned into an exotic soft state. I also discover one of the most coherent quasi-periodic oscillations, and find an interplay between heartbeats and iron emission/absorption lines. These results lead to new insights into the physical nature of exotic variabilities and accretion disk instability.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156623</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable Liquid-Crystal-on-Silicon Photonic Integrated Circuits with Millions of Degrees of Freedom</title>
<link>https://hdl.handle.net/1721.1/156622</link>
<description>Programmable Liquid-Crystal-on-Silicon Photonic Integrated Circuits with Millions of Degrees of Freedom
Wang, Archer
This thesis proposes a novel approach to photonics, wherein waveguides are formed entirely within a homogeneous liquid crystal layer using Liquid-Crystal-on-Silicon (LCoS) technology. Utilizing the electro-optical properties of LCs, we demonstrate the theoretical feasibility of inducing refractive index variations solely within the LC medium to guide light. This method diverges from traditional waveguiding techniques that rely on solid core and cladding structures, offering a new paradigm in reconfigurable photonic devices. Additionally, we develop and explore the idea of a programmable Multi-Mode Interferometer using LCoS technology, enabling the performance of arbitrary unitary transformations. Future work will focus on developing robust simulations of coupled-mode theory with liquid crystals, paving the way for next-generation photonic technologies that perform universal linear optics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156622</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Data-Driven Analysis to Determine the Electrical Needs of a Hybrid Powertrain System for Small, Hyper-Optimized, Track-Day Vehicles</title>
<link>https://hdl.handle.net/1721.1/156621</link>
<description>A Data-Driven Analysis to Determine the Electrical Needs of a Hybrid Powertrain System for Small, Hyper-Optimized, Track-Day Vehicles
Asa, Henry J.
In an effort to maximize the performance of RUSH Auto Work’s RUSH SR racecar, a hybrid powertrain system was designed and evaluated to estimate the performance gains from implementing such a system. An extensive Python program was developed to analyze real-world race data for the RUSH SR, determining energy losses while braking, the vehicle’s current acceleration capabilities, as well as the vehicle’s limitations. This ultimately quantified the vehicle’s current performance values/capabilities, and provided a strong foundation for the analyses that determined the anticipated implications of adding a hybrid powertrain system to the car. Despite the mass additions associated with adding an electric motor, battery pack, and additional components to control the system, the power gains from the system yielded a net greater power-to-weight ratio than the original vehicle without the hybrid system. An analysis of energy recuperation through regenerative braking demonstrated the potential to reduce the size of the battery pack (which decreases the mass of the system) without compromising on the power requirements and capabilities of the system. During periods of heavy braking, it was found that a significant portion of the battery could be recharged, allowing for significant reductions in the capacity of the battery pack.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156621</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Approaches that Extend Healthcare: Algorithms &amp; Applications</title>
<link>https://hdl.handle.net/1721.1/156620</link>
<description>Machine Learning Approaches that Extend Healthcare: Algorithms &amp; Applications
Yang, Yuzhe
Modern clinical systems frequently exhibit sporadic patient visits, delayed diagnoses, and unequal care distribution among diverse populations. Often, diseases aren’t identified until they reach advanced stages. The scarcity of specialists and disparities in healthcare access further complicate the long-term monitoring, timely intervention, and unbiased assessments. This thesis addresses the above challenges by developing artificial intelligence (AI) and machine learning (ML) algorithms and building practical systems that use these algorithms to solve key problems in healthcare and medicine.&#13;
&#13;
Specifically, on the algorithms front, the thesis introduces principled ML approaches to achieve fair, unbiased, and generalizable AI models, addressing core challenges in real-world medical data which encompass four main axes:&#13;
• Label Scarcity: The thesis presents a novel self-supervised learning scheme that learns periodic and frequency information in data without labels, enabling representation learning for periodic tasks like vital signs estimation with minimal labeling efforts.&#13;
• Data Imbalance: The thesis develops new ML algorithms to address data imbalance in regression, filling the gap in techniques for practical imbalanced regression problems.&#13;
• Domain Generalization: The thesis presents theoretically grounded learning methods that ensure generalization across imbalanced domains and unseen environments.&#13;
• Subpopulation Shifts: The thesis studies learning in the presence of underrepresented subgroups, providing actionable insights for model deployment in real-world settings.&#13;
&#13;
On the applications front, the thesis develops new AI-driven biomarkers and systems for human disease and medicine leveraging the proposed algorithms, enabling discovery and advancing delivery and equity in healthcare:&#13;
• Early Diagnosis Biomarker for Parkinson’s: The thesis presents an AI-based biomarker for Parkinson’s disease that enables early detection years before standard clinical diagnosis, as well as longitudinal progression tracking using nocturnal breathing signals.&#13;
• In-Home Touchless Monitoring of Sleep Posture: The thesis designs novel AI systems for continuous and contactless sleep posture monitoring overnight in the user’s own home using wireless signals.&#13;
• Equitable Medical AI Deployments In The Wild: The thesis establishes best practices for medical imaging AI models that maintain their performance and fairness in deployments beyond their initial training contexts, across diverse populations and unseen sites.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156620</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy-Efficient Hardware Architectures for Enhanced Secure Communication Systems</title>
<link>https://hdl.handle.net/1721.1/156619</link>
<description>Energy-Efficient Hardware Architectures for Enhanced Secure Communication Systems
Woo, Jongchan
In the era of digital transformation, the expansion of the Internet of Things (IoT) has been pivotal in driving innovations across various sectors. However, this expansion also brings forth heightened security risks, particularly in the communication between billions of connected devices. This thesis presents significant advancements in secure and reliable communication systems, crucial for addressing these risks within IoT infrastructures. It explores the development and integration of cryptographic solutions designed to enhance both the energy efficiency and reliability of communications. Central to this work is the CERMET framework, which integrates energy-efficient cryptographic techniques with both symmetric (AES) and asymmetric (ECC) encryption methodologies. This framework significantly reduces the energy demands of cryptographic operations, crucial in energy-constrained environments. Additionally, this research repurposes the padding bits of AES to improve error correction capabilities, thereby enhancing the reliability of data transmission across noisy channels. Together with the application of the Guessing Random Additive Noise Decoding (GRAND) decoder, these technologies are unified into a comprehensive system that assures robust security and data integrity. This work not only addresses the critical needs for energy efficiency in IoT but also sets a new benchmark for the security and robustness of communication systems, facilitating a scalable and adaptable solution for various IoT applications.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156619</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning for Sepsis Prognosis: Prediction Models and Dissecting Electronic Health Records</title>
<link>https://hdl.handle.net/1721.1/156618</link>
<description>Machine Learning for Sepsis Prognosis: Prediction Models and Dissecting Electronic Health Records
Liao, Wei
Sepsis is the body's extreme response to an infection. It is a life-threatening medical emergency. Given the heavy burden sepsis has posed on the health care system, extensive research in the area has been performed to facilitate sepsis diagnosis. Sepsis prognosis can support the assessment of the likely progression of the disease and thus inform treatment decisions, but it is much less explored. Here I present two approaches to building sepsis prognosis models. First, I introduced the idea of assessing neutrophil function from simple-to-obtain phase microscopy images. I developed an experimental pipeline using measurement of reactive oxygen species generation as a label of neutrophil function. I generated a large neutrophil imaging dataset and explored different deep learning approaches to predict neutrophil activation state. Second, I developed machine learning models to predict sepsis patients' future clinical scores using electronic health records. As part of the effort, I developed a multidatabase extraction pipeline to facilitate the electronic health record extraction process. My work demonstrates the potential of using deep learning models to evaluate functional aspects of the immune system and to predict sepsis patients' future state, which could provide significant insight into sepsis prognostic monitoring and is easy to adopt in clinical settings. It is of great significance to understand the input data in developing reliable and generalizable machine learning for healthcare models. It is also increasingly apparent that machine learning for healthcare models can predict patient-sensitive information from data that does not explicitly encode it. However, we lack a clear understanding of the extent of the problem: what types of sensitive information can be predicted and how this generalizes to different models or different datasets. We lack approaches to develop models that can make clinical inferences but not infer sensitive information.
Critically, we also lack approaches to explain such data encoding. Using electronic health records, I thoroughly investigated the ability of machine learning models to encode a wide range of patient sensitive information. I developed a strategy to ensure that clinical prediction is minimally based on patient-sensitive information. I presented an approach that can explain feature importance in patient sensitive information encoding. This set of studies not only allows us to gain deep understanding of the sepsis patient clinical score prediction model but also are applicable to a variety of machine learning models utilizing time-series data.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156618</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cluster Analysis in High Dimensions: Robustness, Privacy, and Beyond</title>
<link>https://hdl.handle.net/1721.1/156617</link>
<description>Cluster Analysis in High Dimensions: Robustness, Privacy, and Beyond
Narayanan, Shyam
Cluster analysis focuses on understanding the cluster structure of data, and is perhaps one of the most important subfields in high-dimensional data analysis. Traditionally, cluster analysis focuses on partitioning data into closely related groups, such as in k-means clustering and learning mixture models. However, one sometimes overlooked part of cluster analysis is analyzing data from a single cluster: this encompasses problems such as mean estimation and covariance estimation, which correspond to learning the location and shape of a cluster, respectively. In this thesis, we study various classic problems in high-dimensional cluster analysis, relating to both identifying several clusters and learning a single cluster. We provide improved algorithms and lower bounds for problems including k-means and k-median clustering, Gaussian mean and covariance estimation, high-dimensional mean testing, and learning mixtures of Gaussians. Importantly, in this thesis we also focus on the socially motivated constraints of robustness, privacy, and explainability, and how they affect the complexity of these problems. In our quest to understand cluster analysis under such socially motivated constraints, we discover the first black-box transformation from robustness to privacy, as well as the first-known statistical separation between some natural models of robust statistics.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156617</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference and Robotic Path Planning over High Dimensional Categorical Observations</title>
<link>https://hdl.handle.net/1721.1/156616</link>
<description>Inference and Robotic Path Planning over High Dimensional Categorical Observations
San Soucie, John Edward
Advances in marine autonomy, deep learning, and in-situ marine sensing technology have enabled oceanographers to collect vast amounts of spatiotemporally distributed, sparse, high-dimensional categorical data. Statistical models, particularly in streaming and computationally constrained settings, have lagged behind data collection. Recent developments in topic modeling for robotics have highlighted the potential to efficiently extract meaningful relationships from categorical data and adjust robotic path planning based on real-time inference. This dissertation seeks to fill the gap in streaming statistical models for sparse, high-dimensional categorical data, in the context of open-ocean phytoplankton community ecology. We begin by exploring the use of existing topic modeling approaches for plankton community characterization. Topic models are compared to standard ecological techniques for dimensionality reduction. The increased fidelity and expressiveness of the topic modeling approach allows for greater resolution of plankton co-occurrence relationships. By analyzing these relationships and ocean physics in and around a retentive eddy, the source of phytoplankton variability is traced to storm-driven advection on the ocean surface. We conclude that topic models offer unique insights into the causal mechanisms underlying plankton community variability. Next, we turn our focus to the development of a streaming belief model for categorical path planning. Such a model must be capable of predicting in regions without data, and it must be able to process streaming data in a computationally efficient manner. We introduce the Gaussian Dirichlet Random Field model, a novel topic model with spatially continuous latent log-probabilities. In addition to producing a more accurate model than the state of the art in locations with data, the Gaussian Dirichlet Random Field model can interpolate and extrapolate.
The model is initially presented with a batch hybrid Markov Chain-Monte Carlo inference procedure. We develop a streaming fully-variational inference approach for inference, called Streaming Gaussian Dirichlet Random Fields, which satisfies both the prediction and efficiency requirements for path planning belief models. In-silico experiments demonstrate the ability of this model to accurately map latent co-occurrence patterns. Comparisons to a standard Gaussian process on both path-planning tasks and observation mapping tasks show how the ability of Streaming Gaussian Dirichlet Random Fields to leverage additional categorical observations enables superior performance.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/156616</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Deep Learning with Sparsity: Algorithms, Systems, and Applications</title>
<link>https://h